Data Scientist Nanodegree

Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App

This notebook walks you through one of the most popular Udacity projects across machine learning and artificial intelligence nanodegree programs. The goal is to classify images of dogs according to their breed.

If you are looking for a more guided capstone project related to deep learning and convolutional neural networks, this might be just it. Notice that even if you follow the notebook through creating your classifier, you must still create a blog post or deploy an application to fulfill the requirements of the capstone project.

Also notice, you may be able to use only parts of this notebook (for example certain coding portions or the data) without completing all parts and still meet all requirements of the capstone project.


In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.


Why We're Here

In this notebook, you will take the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that the person most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

Sample Dog Output

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!

The Road Ahead

We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Use a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 6: Write your Algorithm
  • Step 7: Test Your Algorithm

Step 0: Import Datasets

Import Dog Dataset

In the code cell below, we import a dataset of dog images. We populate a few variables through the use of the load_files function from the scikit-learn library:

  • train_files, valid_files, test_files - numpy arrays containing file paths to images
  • train_targets, valid_targets, test_targets - numpy arrays containing onehot-encoded classification labels
  • dog_names - list of string-valued dog breed names for translating labels
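As a framework-free illustration of what the one-hot encoded labels look like, here is a minimal numpy sketch of the encoding that `np_utils.to_categorical` performs (the `to_one_hot` helper and the 4-class example are purely illustrative; the real dataset uses 133 breed categories):

```python
import numpy as np

def to_one_hot(targets, num_classes=133):
    """Return a (len(targets), num_classes) array with a single 1 per row,
    placed at each target's class index."""
    return np.eye(num_classes)[np.asarray(targets)]

# e.g. labels 0 and 2 out of 4 classes
encoded = to_one_hot([0, 2], num_classes=4)
# encoded[0] -> [1., 0., 0., 0.]
# encoded[1] -> [0., 0., 1., 0.]
```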
In [1]:
from sklearn.datasets import load_files       
from keras.utils import np_utils
import numpy as np
from glob import glob

# define function to load train, test, and validation datasets
def load_dataset(path):
    data = load_files(path)
    dog_files = np.array(data['filenames'])
    dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
    return dog_files, dog_targets

# load train, test, and validation datasets
train_files, train_targets = load_dataset('../../../data/dog_images/train')
valid_files, valid_targets = load_dataset('../../../data/dog_images/valid')
test_files, test_targets = load_dataset('../../../data/dog_images/test')

# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("../../../data/dog_images/train/*/"))]

# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
Using TensorFlow backend.
There are 133 total dog categories.
There are 8351 total dog images.

There are 6680 training dog images.
There are 835 validation dog images.
There are 836 test dog images.

Import Human Dataset

In the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array human_files.

In [2]:
import random
random.seed(8675309)

# load filenames in shuffled human dataset
human_files = np.array(glob("../../../data/lfw/*/*"))
random.shuffle(human_files)

# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
There are 13233 total human images.

Step 1: Detect Humans

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on GitHub. We have downloaded one of these detectors and stored it in the haarcascades directory.

In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [3]:
import cv2                
import matplotlib.pyplot as plt                        
%matplotlib inline                               

# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load color (BGR) image
img = cv2.imread(human_files[5])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,100,0),3)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 1

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
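To make the (x, y, w, h) convention concrete, here is a small sketch (using a dummy image array rather than an actual photo) that crops a detected face region out of an image with numpy slicing; note that rows index the vertical axis, so y comes first:

```python
import numpy as np

def crop_face(img, box):
    """Crop the (h, w) region whose top-left corner is at (x, y)."""
    x, y, w, h = box
    return img[y:y + h, x:x + w]

# dummy 100x100 BGR image and a fabricated bounding box
img = np.zeros((100, 100, 3), dtype=np.uint8)
face = crop_face(img, (10, 20, 30, 40))   # x=10, y=20, w=30, h=40
# face has height 40 and width 30
```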

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [4]:
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    '''
    INPUT:
    img_path - image path for the image that should be checked for a human face
    
    OUTPUT:
    returns "True" in case a human face was detected in the given image
    '''
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

(IMPLEMENTATION) Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

Answer:

100% of the human faces were correctly identified as human faces.

11% of the dog faces were also passed as human faces.

In [5]:
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.

## TODO: Test the performance of the face_detector algorithm 
## on the images in human_files_short and dog_files_short.
truth_humans = [face_detector(path) for path in human_files_short]
sum(1 for x in truth_humans if x)
Out[5]:
100
In [6]:
truth_dogs = [face_detector(path) for path in dog_files_short]

truth_dogs[:15]
Out[6]:
[True,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 True]
In [7]:
sum(1 for x in truth_dogs if x)
Out[7]:
11

So, 100% of the human faces were correctly identified as human faces.

11% of the dog faces were also passed as human faces.

Question 2: This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unnecessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?

Answer: I think it is a reasonable request to pose on users to let them know that they will get the best result out of this little experiment with a clear, front-facing picture. Since most users are used to taking selfies all the time, they should be able to manage this.

We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on each of the datasets.

In [8]:
## (Optional) TODO: Report the performance of another  
## face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.

Step 2: Detect Dogs

In this section, we use a pre-trained ResNet-50 model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories. Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.

In [9]:
from keras.applications.resnet50 import ResNet50

# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5
102858752/102853048 [==============================] - 1s 0us/step

Pre-process the Data

When using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape

$$ (\text{nb\_samples}, \text{rows}, \text{columns}, \text{channels}), $$

where nb_samples corresponds to the total number of images (or samples), and rows, columns, and channels correspond to the number of rows, columns, and channels for each image, respectively.

The path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape

$$ (1, 224, 224, 3). $$

The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape

$$ (\text{nb\_samples}, 224, 224, 3). $$

Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. It is best to think of nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!

In [10]:
from keras.preprocessing import image                  
from tqdm import tqdm

def path_to_tensor(img_path):
    '''
    INPUT:
    img_path - image path for the image that should be transformed into a 4D tensor
    
    OUTPUT:
    returns a 4D tensor of the image with shape (1,224,224,3)
    '''
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    '''
    INPUT:
    img_paths - list of image paths to be transformed into a single 4D tensor
    
    OUTPUT:
    returns a 4D tensor of shape (nb_samples, 224, 224, 3) stacking all images
    '''
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)

Making Predictions with ResNet-50

Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in RGB as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function preprocess_input. If you're curious, you can check the code for preprocess_input here.
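As a framework-free sketch of the two steps just described (channel reordering followed by mean subtraction, using the mean values quoted above), the following illustrates what the real Keras `preprocess_input` does for this family of models:

```python
import numpy as np

# mean pixel values quoted above, one per channel after reordering
IMAGENET_MEAN = np.array([103.939, 116.779, 123.68])

def preprocess_sketch(x):
    """x: 4D tensor of RGB images, shape (nb_samples, rows, cols, 3).
    Reorders channels, then subtracts the mean pixel from every pixel."""
    x = x[..., ::-1].astype('float64')  # reorder the channels
    return x - IMAGENET_MEAN            # subtract the mean pixel

batch = np.ones((1, 224, 224, 3)) * 100.0
out = preprocess_sketch(batch)
```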

Now that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the predict method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the ResNet50_predict_labels function below.

By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this dictionary.
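For instance, with a toy probability vector over four made-up categories, `np.argmax` recovers the predicted class index:

```python
import numpy as np

# toy "prediction" over four categories
probs = np.array([0.1, 0.05, 0.7, 0.15])
predicted_class = np.argmax(probs)   # index of the largest probability
# predicted_class -> 2
```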

In [11]:
from keras.applications.resnet50 import preprocess_input, decode_predictions

def ResNet50_predict_labels(img_path):
    '''
    INPUT:
    img_path - image path 
    
    OUTPUT:
    returns a prediction vector for the image located at img_path
    '''
    img = preprocess_input(path_to_tensor(img_path))
    return np.argmax(ResNet50_model.predict(img))

Write a Dog Detector

While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive).

We use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).

In [12]:
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    '''
    INPUT:
    img_path - image path 
    
    OUTPUT:
    returns "True" in case a dog was predicted by ResNet50 and "False" otherwise
    '''
    prediction = ResNet50_predict_labels(img_path)
    return 151 <= prediction <= 268

(IMPLEMENTATION) Assess the Dog Detector

Question 3: Use the code cell below to test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?
  • What percentage of the images in dog_files_short have a detected dog?

Answer:

0% of the human faces were identified as dogs.

100% of the dogs have been identified as dogs.

This is a really good result, as every picture was classified correctly.

In [13]:
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.

## TODO: Test the performance of the dog_detector algorithm 
## on the images in human_files_short and dog_files_short.
truth_humans = [dog_detector(path) for path in human_files_short]

truth_humans[:15]
Out[13]:
[False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False,
 False]
In [14]:
sum(1 for x in truth_humans if x)
Out[14]:
0
In [15]:
truth_dogs = [dog_detector(path) for path in dog_files_short]

truth_dogs[:15]
Out[15]:
[True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True,
 True]
In [16]:
sum(1 for x in truth_dogs if x)
Out[16]:
100

So, 0% of the human faces were identified as dogs.

And: 100% of the dogs have been identified as dogs.

This is an extremely good result, as every picture was classified correctly.


Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.

Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train.

We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany Welsh Springer Spaniel

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever American Water Spaniel

Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador Chocolate Labrador Black Labrador

We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.

Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!

Pre-process the Data

We rescale the images by dividing every pixel in every image by 255.

In [17]:
from PIL import ImageFile                            
ImageFile.LOAD_TRUNCATED_IMAGES = True                 

# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
100%|██████████| 6680/6680 [01:17<00:00, 86.71it/s] 
100%|██████████| 835/835 [00:08<00:00, 99.17it/s] 
100%|██████████| 836/836 [00:08<00:00, 92.91it/s] 

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:

    model.summary()

We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs:

Sample CNN

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.

Answer: Convolutional networks are often used for image recognition. They rely heavily on feature reduction / edge detection, which helps a lot to reduce training times. In a more traditional, perceptron-based neural network, the layers are usually "full" (dense) layers. The sparser layers resulting from the convolutional approach therefore show two very positive effects:

  1. Greatly reduced training times
  2. Less proneness to overfitting

I added some additional dropout layers to further reduce the tendency to overfit.

In [19]:
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential

model = Sequential()

### TODO: Define your architecture.
model.add(Conv2D(filters=16, kernel_size=2, padding='valid', activation='relu', input_shape=(224,224,3)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.15))
model.add(Conv2D(32, kernel_size=(2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.1))
model.add(Conv2D(64, kernel_size=(2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.2))
model.add(Dense(133, activation='softmax'))

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_4 (Conv2D)            (None, 223, 223, 16)      208       
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 111, 111, 16)      0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 111, 111, 16)      0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 110, 110, 32)      2080      
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 55, 55, 32)        0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 55, 55, 32)        0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 54, 54, 64)        8256      
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 27, 27, 64)        0         
_________________________________________________________________
global_average_pooling2d_2 ( (None, 64)                0         
_________________________________________________________________
dropout_6 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 133)               8645      
=================================================================
Total params: 19,189
Trainable params: 19,189
Non-trainable params: 0
_________________________________________________________________

Compile the Model

In [20]:
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

(IMPLEMENTATION) Train the Model

Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.

You are welcome to augment the training data, but this is not a requirement.
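If you do want to try augmentation, Keras's `ImageDataGenerator` is the usual tool; as a framework-free illustration of the idea, here is a minimal numpy sketch that produces randomly flipped and shifted copies of a batch (the `augment_batch` helper and its parameter values are purely illustrative):

```python
import numpy as np

def augment_batch(batch, max_shift=10, rng=None):
    """Return a copy of batch with each image randomly flipped and translated."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = batch.copy()
    for i in range(len(out)):
        if rng.random() < 0.5:
            out[i] = out[i, :, ::-1].copy()      # horizontal flip
        dx = int(rng.integers(-max_shift, max_shift + 1))
        out[i] = np.roll(out[i], dx, axis=1)     # crude horizontal shift (wraps around)
    return out

dummy = np.random.default_rng(1).random((4, 224, 224, 3)).astype('float32')
augmented = augment_batch(dummy)
```

In practice you would plug augmented batches (or a Keras generator) into training instead of the raw tensors.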

DON'T START - training is quite slow, and this model is not needed later

In [22]:
from keras.callbacks import ModelCheckpoint  

### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 1000

### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5', 
                               verbose=1, save_best_only=True)

model.fit(train_tensors, train_targets, 
          validation_data=(valid_tensors, valid_targets),
          epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples
Epoch 1/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.4339 - acc: 0.0548Epoch 00001: val_loss improved from inf to 4.54765, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 4.4338 - acc: 0.0546 - val_loss: 4.5477 - val_acc: 0.0443
Epoch 2/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.4237 - acc: 0.0569Epoch 00002: val_loss improved from 4.54765 to 4.53622, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 4.4230 - acc: 0.0569 - val_loss: 4.5362 - val_acc: 0.0431
Epoch 3/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.4173 - acc: 0.0571Epoch 00003: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.4172 - acc: 0.0570 - val_loss: 4.5382 - val_acc: 0.0455
Epoch 4/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.4047 - acc: 0.0577Epoch 00004: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.4053 - acc: 0.0575 - val_loss: 4.5486 - val_acc: 0.0407
Epoch 5/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3929 - acc: 0.0619Epoch 00005: val_loss improved from 4.53622 to 4.51613, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3932 - acc: 0.0620 - val_loss: 4.5161 - val_acc: 0.0443
Epoch 6/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3884 - acc: 0.0578Epoch 00006: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3884 - acc: 0.0581 - val_loss: 4.5323 - val_acc: 0.0443
Epoch 7/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3864 - acc: 0.0592Epoch 00007: val_loss improved from 4.51613 to 4.47902, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3861 - acc: 0.0590 - val_loss: 4.4790 - val_acc: 0.0575
Epoch 8/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3813 - acc: 0.0599Epoch 00008: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3818 - acc: 0.0599 - val_loss: 4.5000 - val_acc: 0.0527
Epoch 9/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3688 - acc: 0.0592Epoch 00009: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3691 - acc: 0.0594 - val_loss: 4.4821 - val_acc: 0.0395
Epoch 10/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3635 - acc: 0.0635Epoch 00010: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3616 - acc: 0.0638 - val_loss: 4.4870 - val_acc: 0.0479
Epoch 11/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3517 - acc: 0.0652Epoch 00011: val_loss improved from 4.47902 to 4.45567, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3514 - acc: 0.0651 - val_loss: 4.4557 - val_acc: 0.0455
Epoch 12/1000
6660/6680 [============================>.] - ETA: 0s - loss: 4.3509 - acc: 0.0640Epoch 00012: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3508 - acc: 0.0642 - val_loss: 4.5129 - val_acc: 0.0455
Epoch 13/1000
Epoch 00013: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3370 - acc: 0.0656 - val_loss: 4.5103 - val_acc: 0.0455
Epoch 14/1000
Epoch 00014: val_loss improved from 4.45567 to 4.45206, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 4.3305 - acc: 0.0668 - val_loss: 4.4521 - val_acc: 0.0539
[... epochs 15-140 truncated: training loss fell steadily from 4.32 to 3.78 (acc 0.070 to 0.144), and the best val_loss improved stepwise from 4.45206 to 4.04978, saving the model to saved_models/weights.best.from_scratch.hdf5 at each improvement ...]
Epoch 141/1000
Epoch 00141: val_loss improved from 4.04978 to 4.04873, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7771 - acc: 0.1433 - val_loss: 4.0487 - val_acc: 0.1126
Epoch 142/1000
Epoch 00142: val_loss improved from 4.04873 to 4.02742, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7685 - acc: 0.1428 - val_loss: 4.0274 - val_acc: 0.1126
Epoch 143/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7636 - acc: 0.1455Epoch 00143: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7636 - acc: 0.1454 - val_loss: 4.1064 - val_acc: 0.0994
Epoch 144/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7620 - acc: 0.1471Epoch 00144: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7609 - acc: 0.1473 - val_loss: 4.1258 - val_acc: 0.1042
Epoch 145/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7659 - acc: 0.1441Epoch 00145: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7653 - acc: 0.1440 - val_loss: 4.0734 - val_acc: 0.0970
Epoch 146/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7540 - acc: 0.1453Epoch 00146: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7537 - acc: 0.1455 - val_loss: 4.0594 - val_acc: 0.1162
Epoch 147/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7471 - acc: 0.1473Epoch 00147: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7469 - acc: 0.1475 - val_loss: 4.1063 - val_acc: 0.1042
Epoch 148/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7512 - acc: 0.1479Epoch 00148: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7525 - acc: 0.1476 - val_loss: 4.1418 - val_acc: 0.1018
Epoch 149/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7530 - acc: 0.1483Epoch 00149: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7540 - acc: 0.1481 - val_loss: 4.0843 - val_acc: 0.1018
Epoch 150/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7386 - acc: 0.1502Epoch 00150: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7394 - acc: 0.1499 - val_loss: 4.0835 - val_acc: 0.1222
Epoch 151/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7440 - acc: 0.1489Epoch 00151: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7436 - acc: 0.1493 - val_loss: 4.0836 - val_acc: 0.1054
Epoch 152/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7334 - acc: 0.1500Epoch 00152: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7317 - acc: 0.1503 - val_loss: 4.0697 - val_acc: 0.1018
Epoch 153/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7344 - acc: 0.1506Epoch 00153: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7332 - acc: 0.1507 - val_loss: 4.0607 - val_acc: 0.1102
Epoch 154/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7440 - acc: 0.1520Epoch 00154: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7442 - acc: 0.1521 - val_loss: 4.0509 - val_acc: 0.1090
Epoch 155/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7466 - acc: 0.1473Epoch 00155: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7458 - acc: 0.1476 - val_loss: 4.0598 - val_acc: 0.1018
Epoch 156/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7361 - acc: 0.1473Epoch 00156: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7362 - acc: 0.1472 - val_loss: 4.1511 - val_acc: 0.0994
Epoch 157/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7287 - acc: 0.1505Epoch 00157: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7283 - acc: 0.1504 - val_loss: 4.0282 - val_acc: 0.1162
Epoch 158/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7296 - acc: 0.1523Epoch 00158: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7306 - acc: 0.1519 - val_loss: 4.1339 - val_acc: 0.1006
Epoch 159/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7240 - acc: 0.1473Epoch 00159: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7238 - acc: 0.1476 - val_loss: 4.0449 - val_acc: 0.1126
Epoch 160/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7093 - acc: 0.1557Epoch 00160: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7096 - acc: 0.1555 - val_loss: 4.1077 - val_acc: 0.0922
Epoch 161/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7239 - acc: 0.1536Epoch 00161: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7263 - acc: 0.1534 - val_loss: 4.0436 - val_acc: 0.1042
Epoch 162/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7231 - acc: 0.1535Epoch 00162: val_loss improved from 4.02742 to 4.02210, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7228 - acc: 0.1534 - val_loss: 4.0221 - val_acc: 0.1222
Epoch 163/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7201 - acc: 0.1518Epoch 00163: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7213 - acc: 0.1519 - val_loss: 4.0605 - val_acc: 0.1066
Epoch 164/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7104 - acc: 0.1542Epoch 00164: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7089 - acc: 0.1543 - val_loss: 4.0394 - val_acc: 0.1114
Epoch 165/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7117 - acc: 0.1571Epoch 00165: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7114 - acc: 0.1567 - val_loss: 4.1289 - val_acc: 0.1030
Epoch 166/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7084 - acc: 0.1544Epoch 00166: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7090 - acc: 0.1546 - val_loss: 4.0284 - val_acc: 0.1126
Epoch 167/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7029 - acc: 0.1580Epoch 00173: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7023 - acc: 0.1582 - val_loss: 4.0570 - val_acc: 0.1030
Epoch 174/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7032 - acc: 0.1560Epoch 00174: val_loss improved from 4.00746 to 3.99649, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7029 - acc: 0.1560 - val_loss: 3.9965 - val_acc: 0.1114
Epoch 175/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6968 - acc: 0.1554Epoch 00175: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6958 - acc: 0.1555 - val_loss: 4.0886 - val_acc: 0.1054
Epoch 176/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6951 - acc: 0.1592Epoch 00176: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6945 - acc: 0.1591 - val_loss: 4.0007 - val_acc: 0.1174
Epoch 177/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6992 - acc: 0.1517Epoch 00177: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6985 - acc: 0.1516 - val_loss: 4.0398 - val_acc: 0.1018
Epoch 178/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6945 - acc: 0.1557Epoch 00178: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6949 - acc: 0.1557 - val_loss: 4.1071 - val_acc: 0.1018
Epoch 179/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6807 - acc: 0.1544Epoch 00179: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6789 - acc: 0.1543 - val_loss: 4.0310 - val_acc: 0.1066
Epoch 180/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6819 - acc: 0.1547Epoch 00180: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6819 - acc: 0.1543 - val_loss: 4.0267 - val_acc: 0.1066
Epoch 181/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.7055 - acc: 0.1542Epoch 00181: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.7041 - acc: 0.1546 - val_loss: 4.1401 - val_acc: 0.1042
Epoch 182/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6768 - acc: 0.1530Epoch 00182: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6771 - acc: 0.1530 - val_loss: 4.0928 - val_acc: 0.1150
Epoch 183/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6833 - acc: 0.1580Epoch 00183: val_loss improved from 3.99649 to 3.97683, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6825 - acc: 0.1585 - val_loss: 3.9768 - val_acc: 0.1246
Epoch 184/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6635 - acc: 0.1584Epoch 00184: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6619 - acc: 0.1585 - val_loss: 4.1313 - val_acc: 0.1090
Epoch 185/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6714 - acc: 0.1593Epoch 00185: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6705 - acc: 0.1597 - val_loss: 4.0542 - val_acc: 0.1150
Epoch 186/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6774 - acc: 0.1583Epoch 00186: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6772 - acc: 0.1585 - val_loss: 3.9957 - val_acc: 0.1150
Epoch 187/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6724 - acc: 0.1566Epoch 00187: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6734 - acc: 0.1566 - val_loss: 4.1327 - val_acc: 0.1030
Epoch 188/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6711 - acc: 0.1538Epoch 00188: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6706 - acc: 0.1537 - val_loss: 4.0340 - val_acc: 0.1198
Epoch 189/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6635 - acc: 0.1584Epoch 00189: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6621 - acc: 0.1582 - val_loss: 4.0503 - val_acc: 0.1234
Epoch 190/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6548 - acc: 0.1569Epoch 00190: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6543 - acc: 0.1569 - val_loss: 4.0244 - val_acc: 0.1150
Epoch 191/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6762 - acc: 0.1586Epoch 00191: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6761 - acc: 0.1587 - val_loss: 4.0485 - val_acc: 0.1102
Epoch 192/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6783 - acc: 0.1604Epoch 00192: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6782 - acc: 0.1603 - val_loss: 4.0809 - val_acc: 0.1102
Epoch 193/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6738 - acc: 0.1608Epoch 00193: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6728 - acc: 0.1611 - val_loss: 4.0171 - val_acc: 0.1102
Epoch 194/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6565 - acc: 0.1602Epoch 00194: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6556 - acc: 0.1603 - val_loss: 4.0193 - val_acc: 0.1126
Epoch 195/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6498 - acc: 0.1647Epoch 00195: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6513 - acc: 0.1645 - val_loss: 4.0616 - val_acc: 0.1054
Epoch 196/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6717 - acc: 0.1560Epoch 00196: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6706 - acc: 0.1563 - val_loss: 4.0429 - val_acc: 0.1114
Epoch 197/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6527 - acc: 0.1599Epoch 00197: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6527 - acc: 0.1600 - val_loss: 4.1901 - val_acc: 0.0886
Epoch 198/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6542 - acc: 0.1593Epoch 00198: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6536 - acc: 0.1591 - val_loss: 4.0623 - val_acc: 0.1126
Epoch 199/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6533 - acc: 0.1640Epoch 00199: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6528 - acc: 0.1636 - val_loss: 4.1223 - val_acc: 0.1054
Epoch 200/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6526 - acc: 0.1646Epoch 00200: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6516 - acc: 0.1644 - val_loss: 4.0450 - val_acc: 0.1078
Epoch 201/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6393 - acc: 0.1578Epoch 00201: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6388 - acc: 0.1581 - val_loss: 4.0992 - val_acc: 0.1042
Epoch 202/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6568 - acc: 0.1671Epoch 00202: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6572 - acc: 0.1666 - val_loss: 4.1529 - val_acc: 0.1042
Epoch 203/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6385 - acc: 0.1601Epoch 00203: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6385 - acc: 0.1599 - val_loss: 4.0166 - val_acc: 0.1126
Epoch 204/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6401 - acc: 0.1650Epoch 00204: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6403 - acc: 0.1648 - val_loss: 4.1405 - val_acc: 0.0970
Epoch 205/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6493 - acc: 0.1634Epoch 00205: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6493 - acc: 0.1635 - val_loss: 4.0949 - val_acc: 0.1042
Epoch 206/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6433 - acc: 0.1640Epoch 00206: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6415 - acc: 0.1644 - val_loss: 4.0354 - val_acc: 0.1030
Epoch 207/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6321 - acc: 0.1635Epoch 00207: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6325 - acc: 0.1632 - val_loss: 4.0807 - val_acc: 0.1054
Epoch 208/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6462 - acc: 0.1632Epoch 00208: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6469 - acc: 0.1629 - val_loss: 4.0218 - val_acc: 0.1162
Epoch 209/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6182 - acc: 0.1710Epoch 00209: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6183 - acc: 0.1708 - val_loss: 4.1330 - val_acc: 0.0982
Epoch 210/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6359 - acc: 0.1650Epoch 00210: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6360 - acc: 0.1645 - val_loss: 4.0749 - val_acc: 0.1066
Epoch 211/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6290 - acc: 0.1649Epoch 00211: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6279 - acc: 0.1651 - val_loss: 4.1072 - val_acc: 0.1138
Epoch 212/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6168 - acc: 0.1689Epoch 00212: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6176 - acc: 0.1686 - val_loss: 4.0599 - val_acc: 0.1054
Epoch 213/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6326 - acc: 0.1620Epoch 00213: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6317 - acc: 0.1621 - val_loss: 4.0656 - val_acc: 0.0994
Epoch 214/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6274 - acc: 0.1665Epoch 00214: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6286 - acc: 0.1662 - val_loss: 4.0697 - val_acc: 0.1102
Epoch 215/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6237 - acc: 0.1691Epoch 00215: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6248 - acc: 0.1690 - val_loss: 4.0803 - val_acc: 0.0994
Epoch 216/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6310 - acc: 0.1622Epoch 00216: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6308 - acc: 0.1621 - val_loss: 4.0880 - val_acc: 0.1018
Epoch 217/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6307 - acc: 0.1694Epoch 00217: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6330 - acc: 0.1696 - val_loss: 4.1161 - val_acc: 0.1054
Epoch 218/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6364 - acc: 0.1700Epoch 00218: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6366 - acc: 0.1701 - val_loss: 4.0568 - val_acc: 0.1066
Epoch 219/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6091 - acc: 0.1704Epoch 00219: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6081 - acc: 0.1704 - val_loss: 4.0477 - val_acc: 0.1066
Epoch 220/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6223 - acc: 0.1694Epoch 00220: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6215 - acc: 0.1696 - val_loss: 3.9829 - val_acc: 0.1114
Epoch 221/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6100 - acc: 0.1724Epoch 00221: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6095 - acc: 0.1723 - val_loss: 4.0836 - val_acc: 0.1078
Epoch 222/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6247 - acc: 0.1722Epoch 00222: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6258 - acc: 0.1719 - val_loss: 4.0555 - val_acc: 0.1030
Epoch 223/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6122 - acc: 0.1673Epoch 00223: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6115 - acc: 0.1675 - val_loss: 4.0022 - val_acc: 0.1078
Epoch 224/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6140 - acc: 0.1649Epoch 00224: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6145 - acc: 0.1650 - val_loss: 4.0541 - val_acc: 0.1090
Epoch 225/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6102 - acc: 0.1758Epoch 00225: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6089 - acc: 0.1756 - val_loss: 3.9912 - val_acc: 0.1269
Epoch 226/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6131 - acc: 0.1683Epoch 00226: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6140 - acc: 0.1680 - val_loss: 4.1360 - val_acc: 0.0862
Epoch 227/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6213 - acc: 0.1698Epoch 00227: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6215 - acc: 0.1701 - val_loss: 4.1482 - val_acc: 0.1006
Epoch 228/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6063 - acc: 0.1694Epoch 00228: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6050 - acc: 0.1696 - val_loss: 4.0968 - val_acc: 0.1066
Epoch 229/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6034 - acc: 0.1718Epoch 00229: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6041 - acc: 0.1714 - val_loss: 4.0832 - val_acc: 0.0922
Epoch 230/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6018 - acc: 0.1757Epoch 00230: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6018 - acc: 0.1759 - val_loss: 4.1127 - val_acc: 0.0970
Epoch 231/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6094 - acc: 0.1694Epoch 00231: val_loss improved from 3.97683 to 3.96842, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6092 - acc: 0.1692 - val_loss: 3.9684 - val_acc: 0.1114
Epoch 232/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6114 - acc: 0.1724Epoch 00232: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6134 - acc: 0.1719 - val_loss: 4.1185 - val_acc: 0.0898
Epoch 233/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5998 - acc: 0.1767Epoch 00233: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5981 - acc: 0.1768 - val_loss: 4.0238 - val_acc: 0.1150
Epoch 234/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6043 - acc: 0.1673Epoch 00234: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6027 - acc: 0.1675 - val_loss: 4.0117 - val_acc: 0.1150
Epoch 235/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6003 - acc: 0.1674Epoch 00235: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6006 - acc: 0.1677 - val_loss: 4.0613 - val_acc: 0.1030
Epoch 236/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5879 - acc: 0.1773Epoch 00236: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5871 - acc: 0.1774 - val_loss: 4.0478 - val_acc: 0.1090
Epoch 237/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6074 - acc: 0.1715Epoch 00237: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6057 - acc: 0.1717 - val_loss: 4.1102 - val_acc: 0.1018
Epoch 238/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.6062 - acc: 0.1754Epoch 00238: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6070 - acc: 0.1750 - val_loss: 4.0906 - val_acc: 0.0958
Epoch 239/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5994 - acc: 0.1707Epoch 00239: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.6006 - acc: 0.1704 - val_loss: 4.0594 - val_acc: 0.1078
Epoch 240/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5761 - acc: 0.1722Epoch 00240: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5785 - acc: 0.1720 - val_loss: 4.1682 - val_acc: 0.0850
Epoch 241/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5922 - acc: 0.1692Epoch 00241: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5915 - acc: 0.1695 - val_loss: 4.0613 - val_acc: 0.1018
Epoch 242/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5877 - acc: 0.1760Epoch 00242: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5862 - acc: 0.1765 - val_loss: 4.0619 - val_acc: 0.0958
Epoch 243/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5894 - acc: 0.1730Epoch 00243: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5882 - acc: 0.1731 - val_loss: 4.0022 - val_acc: 0.1090
Epoch 244/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5827 - acc: 0.1745Epoch 00244: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5823 - acc: 0.1744 - val_loss: 4.0229 - val_acc: 0.1090
Epoch 245/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5961 - acc: 0.1752Epoch 00245: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5958 - acc: 0.1751 - val_loss: 4.0361 - val_acc: 0.1114
Epoch 246/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5781 - acc: 0.1748Epoch 00246: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5778 - acc: 0.1749 - val_loss: 4.0227 - val_acc: 0.1102
Epoch 247/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5835 - acc: 0.1739Epoch 00247: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5837 - acc: 0.1738 - val_loss: 4.0868 - val_acc: 0.1030
Epoch 248/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5794 - acc: 0.1692Epoch 00248: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5795 - acc: 0.1692 - val_loss: 4.2187 - val_acc: 0.0910
Epoch 249/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5850 - acc: 0.1739Epoch 00249: val_loss improved from 3.96842 to 3.96839, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5836 - acc: 0.1741 - val_loss: 3.9684 - val_acc: 0.1198
Epoch 250/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5661 - acc: 0.1727Epoch 00250: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5673 - acc: 0.1723 - val_loss: 4.1052 - val_acc: 0.1066
Epoch 251/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5896 - acc: 0.1758Epoch 00251: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5889 - acc: 0.1756 - val_loss: 4.0881 - val_acc: 0.1018
Epoch 252/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5714 - acc: 0.1713Epoch 00252: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5714 - acc: 0.1711 - val_loss: 4.0327 - val_acc: 0.1126
Epoch 253/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5686 - acc: 0.1779Epoch 00253: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5697 - acc: 0.1777 - val_loss: 4.1423 - val_acc: 0.0886
Epoch 254/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5735 - acc: 0.1721Epoch 00254: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5731 - acc: 0.1720 - val_loss: 4.1349 - val_acc: 0.1066
Epoch 255/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.5501 - acc: 0.1748Epoch 00255: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5503 - acc: 0.1751 - val_loss: 4.0128 - val_acc: 0.1126
Epoch 256/1000
Epoch 00256: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5721 - acc: 0.1681 - val_loss: 4.0327 - val_acc: 0.1126

... (Epochs 257-260: val_loss did not improve) ...

Epoch 261/1000
Epoch 00261: val_loss improved from 3.96839 to 3.95268, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5750 - acc: 0.1768 - val_loss: 3.9527 - val_acc: 0.1174

... (Epochs 262-287: val_loss did not improve) ...

Epoch 288/1000
Epoch 00288: val_loss improved from 3.95268 to 3.94977, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 22s 3ms/step - loss: 3.5450 - acc: 0.1746 - val_loss: 3.9498 - val_acc: 0.1222

... (Epochs 289-395: val_loss did not improve) ...
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4724 - acc: 0.1858 - val_loss: 4.1066 - val_acc: 0.0814
Epoch 396/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4647 - acc: 0.1917Epoch 00396: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4626 - acc: 0.1919 - val_loss: 4.0443 - val_acc: 0.0970
Epoch 397/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4639 - acc: 0.1925Epoch 00397: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4635 - acc: 0.1925 - val_loss: 4.2440 - val_acc: 0.0599
Epoch 398/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4615 - acc: 0.1959Epoch 00398: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4611 - acc: 0.1961 - val_loss: 4.0975 - val_acc: 0.0862
Epoch 399/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4528 - acc: 0.1955Epoch 00399: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4531 - acc: 0.1955 - val_loss: 4.2361 - val_acc: 0.0683
Epoch 400/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4581 - acc: 0.1935Epoch 00400: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4596 - acc: 0.1936 - val_loss: 4.2450 - val_acc: 0.0647
Epoch 401/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4479 - acc: 0.1913Epoch 00401: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4484 - acc: 0.1910 - val_loss: 4.0374 - val_acc: 0.0898
Epoch 402/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4713 - acc: 0.1899Epoch 00402: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4707 - acc: 0.1898 - val_loss: 4.0194 - val_acc: 0.0910
Epoch 403/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4517 - acc: 0.2017Epoch 00403: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4517 - acc: 0.2015 - val_loss: 4.0442 - val_acc: 0.0778
Epoch 404/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4526 - acc: 0.1901Epoch 00404: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4539 - acc: 0.1898 - val_loss: 4.2062 - val_acc: 0.0695
Epoch 405/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4498 - acc: 0.1892Epoch 00405: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4496 - acc: 0.1892 - val_loss: 4.1394 - val_acc: 0.0766
Epoch 406/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4594 - acc: 0.1911Epoch 00406: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4602 - acc: 0.1910 - val_loss: 4.0301 - val_acc: 0.0850
Epoch 407/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4524 - acc: 0.1953Epoch 00407: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4521 - acc: 0.1951 - val_loss: 4.1244 - val_acc: 0.0838
Epoch 408/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4474 - acc: 0.1890Epoch 00408: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4474 - acc: 0.1891 - val_loss: 4.0338 - val_acc: 0.1006
Epoch 409/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4561 - acc: 0.1986Epoch 00409: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4550 - acc: 0.1990 - val_loss: 4.1215 - val_acc: 0.0826
Epoch 410/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4771 - acc: 0.1907Epoch 00410: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4760 - acc: 0.1910 - val_loss: 4.0042 - val_acc: 0.0946
Epoch 411/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4540 - acc: 0.1920Epoch 00411: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4537 - acc: 0.1921 - val_loss: 4.1395 - val_acc: 0.0766
Epoch 412/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4657 - acc: 0.1889Epoch 00412: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4672 - acc: 0.1885 - val_loss: 4.1721 - val_acc: 0.0778
Epoch 413/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4473 - acc: 0.1935Epoch 00413: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4444 - acc: 0.1942 - val_loss: 4.0676 - val_acc: 0.0898
Epoch 414/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4709 - acc: 0.1880Epoch 00414: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4714 - acc: 0.1883 - val_loss: 4.2283 - val_acc: 0.0743
Epoch 415/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4650 - acc: 0.1886Epoch 00415: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4633 - acc: 0.1888 - val_loss: 4.1126 - val_acc: 0.0766
Epoch 416/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4384 - acc: 0.1965Epoch 00416: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4384 - acc: 0.1963 - val_loss: 4.0342 - val_acc: 0.0946
Epoch 417/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4436 - acc: 0.1968Epoch 00417: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4425 - acc: 0.1970 - val_loss: 4.0627 - val_acc: 0.0814
Epoch 418/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4534 - acc: 0.1908Epoch 00418: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4538 - acc: 0.1906 - val_loss: 4.1360 - val_acc: 0.0719
Epoch 419/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4464 - acc: 0.1892Epoch 00419: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4459 - acc: 0.1894 - val_loss: 4.0841 - val_acc: 0.0766
Epoch 420/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4366 - acc: 0.1902Epoch 00420: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4359 - acc: 0.1901 - val_loss: 4.1498 - val_acc: 0.0802
Epoch 421/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4419 - acc: 0.2033Epoch 00421: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4414 - acc: 0.2034 - val_loss: 4.2105 - val_acc: 0.0683
Epoch 422/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4524 - acc: 0.1970Epoch 00422: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4519 - acc: 0.1966 - val_loss: 4.1750 - val_acc: 0.0743
Epoch 423/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4301 - acc: 0.2039Epoch 00423: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4312 - acc: 0.2034 - val_loss: 4.1697 - val_acc: 0.0683
Epoch 424/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4407 - acc: 0.1988Epoch 00424: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4407 - acc: 0.1991 - val_loss: 4.1450 - val_acc: 0.0814
Epoch 425/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4405 - acc: 0.1932Epoch 00425: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4410 - acc: 0.1933 - val_loss: 4.2090 - val_acc: 0.0707
Epoch 426/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4335 - acc: 0.2014Epoch 00426: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4341 - acc: 0.2012 - val_loss: 4.0968 - val_acc: 0.0898
Epoch 427/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4508 - acc: 0.1922Epoch 00427: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4518 - acc: 0.1921 - val_loss: 4.2303 - val_acc: 0.0719
Epoch 428/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4435 - acc: 0.1998Epoch 00428: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4427 - acc: 0.2003 - val_loss: 4.0825 - val_acc: 0.0862
Epoch 429/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4395 - acc: 0.1958Epoch 00429: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4405 - acc: 0.1957 - val_loss: 4.1187 - val_acc: 0.0731
Epoch 430/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4531 - acc: 0.1976Epoch 00430: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4539 - acc: 0.1973 - val_loss: 4.0484 - val_acc: 0.0778
Epoch 431/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4495 - acc: 0.1922Epoch 00431: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4504 - acc: 0.1924 - val_loss: 4.1301 - val_acc: 0.0731
Epoch 432/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4406 - acc: 0.1973Epoch 00432: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4401 - acc: 0.1978 - val_loss: 4.1924 - val_acc: 0.0587
Epoch 433/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4409 - acc: 0.1929Epoch 00433: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4397 - acc: 0.1928 - val_loss: 4.1329 - val_acc: 0.0754
Epoch 434/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4396 - acc: 0.1872Epoch 00434: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4422 - acc: 0.1871 - val_loss: 4.2269 - val_acc: 0.0671
Epoch 435/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4517 - acc: 0.1956Epoch 00435: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4527 - acc: 0.1954 - val_loss: 4.2710 - val_acc: 0.0611
Epoch 436/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4355 - acc: 0.1988Epoch 00436: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4341 - acc: 0.1991 - val_loss: 3.9921 - val_acc: 0.1006
Epoch 437/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4425 - acc: 0.1970Epoch 00437: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4442 - acc: 0.1969 - val_loss: 4.1947 - val_acc: 0.0671
Epoch 438/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4385 - acc: 0.1946Epoch 00438: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4369 - acc: 0.1943 - val_loss: 4.0742 - val_acc: 0.0838
Epoch 439/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4449 - acc: 0.1898Epoch 00439: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4444 - acc: 0.1900 - val_loss: 4.0948 - val_acc: 0.0790
Epoch 440/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4411 - acc: 0.1964Epoch 00440: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4406 - acc: 0.1967 - val_loss: 4.1421 - val_acc: 0.0719
Epoch 441/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4292 - acc: 0.1980Epoch 00441: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4288 - acc: 0.1982 - val_loss: 4.2224 - val_acc: 0.0635
Epoch 442/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4224 - acc: 0.1973Epoch 00442: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4239 - acc: 0.1967 - val_loss: 4.1153 - val_acc: 0.0683
Epoch 443/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4309 - acc: 0.1925Epoch 00443: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4307 - acc: 0.1922 - val_loss: 4.0784 - val_acc: 0.0826
Epoch 444/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4311 - acc: 0.1949Epoch 00444: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4302 - acc: 0.1948 - val_loss: 4.0556 - val_acc: 0.0982
Epoch 445/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4347 - acc: 0.1898Epoch 00445: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4347 - acc: 0.1897 - val_loss: 4.0813 - val_acc: 0.0778
Epoch 446/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4354 - acc: 0.1977Epoch 00446: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4333 - acc: 0.1982 - val_loss: 4.0010 - val_acc: 0.0946
Epoch 447/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4244 - acc: 0.1995Epoch 00447: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4251 - acc: 0.1993 - val_loss: 4.1231 - val_acc: 0.0766
Epoch 448/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4286 - acc: 0.2044Epoch 00448: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4296 - acc: 0.2043 - val_loss: 4.1355 - val_acc: 0.0754
Epoch 449/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4161 - acc: 0.2041Epoch 00449: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4159 - acc: 0.2045 - val_loss: 4.1042 - val_acc: 0.0814
Epoch 450/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4150 - acc: 0.2003Epoch 00450: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4145 - acc: 0.2000 - val_loss: 4.0091 - val_acc: 0.0970
Epoch 451/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4193 - acc: 0.2011Epoch 00451: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4202 - acc: 0.2007 - val_loss: 4.1831 - val_acc: 0.0707
Epoch 452/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4365 - acc: 0.1941Epoch 00452: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4362 - acc: 0.1942 - val_loss: 4.1011 - val_acc: 0.0850
Epoch 453/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4175 - acc: 0.1944Epoch 00453: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4178 - acc: 0.1945 - val_loss: 4.1460 - val_acc: 0.0826
Epoch 454/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4316 - acc: 0.1949Epoch 00454: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4341 - acc: 0.1945 - val_loss: 4.2709 - val_acc: 0.0611
Epoch 455/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4212 - acc: 0.1968Epoch 00455: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4243 - acc: 0.1966 - val_loss: 4.1614 - val_acc: 0.0743
Epoch 456/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4124 - acc: 0.1950Epoch 00456: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4119 - acc: 0.1951 - val_loss: 4.0818 - val_acc: 0.0826
Epoch 457/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4350 - acc: 0.1947Epoch 00457: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4334 - acc: 0.1946 - val_loss: 4.1434 - val_acc: 0.0719
Epoch 458/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4062 - acc: 0.1943Epoch 00458: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4076 - acc: 0.1942 - val_loss: 4.1328 - val_acc: 0.0731
Epoch 459/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4138 - acc: 0.2041Epoch 00459: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4130 - acc: 0.2039 - val_loss: 4.1046 - val_acc: 0.0790
Epoch 460/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4236 - acc: 0.1992Epoch 00460: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4254 - acc: 0.1988 - val_loss: 4.2559 - val_acc: 0.0647
Epoch 461/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4173 - acc: 0.1989Epoch 00461: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4168 - acc: 0.1988 - val_loss: 4.1753 - val_acc: 0.0778
Epoch 462/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4064 - acc: 0.1971Epoch 00462: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4071 - acc: 0.1972 - val_loss: 4.0685 - val_acc: 0.0802
Epoch 463/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4366 - acc: 0.1958Epoch 00463: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4348 - acc: 0.1961 - val_loss: 4.0992 - val_acc: 0.0814
Epoch 464/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4345 - acc: 0.1997Epoch 00464: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4339 - acc: 0.1993 - val_loss: 4.1704 - val_acc: 0.0731
Epoch 465/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4342 - acc: 0.1955Epoch 00465: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4356 - acc: 0.1949 - val_loss: 4.1585 - val_acc: 0.0743
Epoch 466/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4103 - acc: 0.1988Epoch 00466: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4093 - acc: 0.1988 - val_loss: 4.0911 - val_acc: 0.0743
Epoch 467/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4225 - acc: 0.1994Epoch 00467: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4220 - acc: 0.1993 - val_loss: 4.2121 - val_acc: 0.0731
Epoch 468/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4167 - acc: 0.2009Epoch 00468: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4155 - acc: 0.2010 - val_loss: 4.0224 - val_acc: 0.0910
Epoch 469/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4109 - acc: 0.1986Epoch 00469: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4113 - acc: 0.1985 - val_loss: 4.1602 - val_acc: 0.0814
Epoch 470/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4198 - acc: 0.2029Epoch 00470: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4182 - acc: 0.2034 - val_loss: 4.1377 - val_acc: 0.0802
Epoch 471/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4104 - acc: 0.1911Epoch 00471: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4103 - acc: 0.1910 - val_loss: 4.2226 - val_acc: 0.0695
Epoch 472/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4094 - acc: 0.2003Epoch 00472: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4076 - acc: 0.2004 - val_loss: 4.0612 - val_acc: 0.0886
Epoch 473/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3972 - acc: 0.2033Epoch 00473: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3961 - acc: 0.2033 - val_loss: 4.1163 - val_acc: 0.0778
Epoch 474/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4099 - acc: 0.1979Epoch 00474: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4103 - acc: 0.1981 - val_loss: 4.0588 - val_acc: 0.0826
Epoch 475/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3997 - acc: 0.1989Epoch 00475: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3999 - acc: 0.1988 - val_loss: 4.2027 - val_acc: 0.0659
Epoch 476/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4218 - acc: 0.1971Epoch 00476: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4221 - acc: 0.1975 - val_loss: 4.0845 - val_acc: 0.0874
Epoch 477/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4134 - acc: 0.1989Epoch 00477: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4135 - acc: 0.1988 - val_loss: 4.1572 - val_acc: 0.0683
Epoch 478/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4172 - acc: 0.2030Epoch 00478: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4175 - acc: 0.2028 - val_loss: 4.1330 - val_acc: 0.0707
Epoch 479/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4095 - acc: 0.2021Epoch 00479: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4098 - acc: 0.2018 - val_loss: 4.1550 - val_acc: 0.0731
Epoch 480/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4080 - acc: 0.2042Epoch 00480: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4086 - acc: 0.2043 - val_loss: 4.0607 - val_acc: 0.0826
Epoch 481/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4156 - acc: 0.1976Epoch 00481: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4144 - acc: 0.1981 - val_loss: 4.0350 - val_acc: 0.0946
Epoch 482/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4001 - acc: 0.1977Epoch 00482: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4006 - acc: 0.1975 - val_loss: 4.1332 - val_acc: 0.0707
Epoch 483/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4218 - acc: 0.1952Epoch 00483: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4217 - acc: 0.1954 - val_loss: 4.1646 - val_acc: 0.0695
Epoch 484/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4067 - acc: 0.2017Epoch 00484: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4057 - acc: 0.2016 - val_loss: 4.1896 - val_acc: 0.0719
Epoch 485/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3934 - acc: 0.2057Epoch 00485: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3924 - acc: 0.2058 - val_loss: 4.0456 - val_acc: 0.0731
Epoch 486/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4091 - acc: 0.2032Epoch 00486: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4088 - acc: 0.2034 - val_loss: 4.1548 - val_acc: 0.0731
Epoch 487/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4108 - acc: 0.2009Epoch 00487: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4103 - acc: 0.2009 - val_loss: 4.0966 - val_acc: 0.0838
Epoch 488/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3986 - acc: 0.2032Epoch 00488: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4001 - acc: 0.2030 - val_loss: 4.1342 - val_acc: 0.0695
Epoch 489/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4043 - acc: 0.2029Epoch 00489: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4059 - acc: 0.2025 - val_loss: 4.2327 - val_acc: 0.0695
Epoch 490/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4040 - acc: 0.2030Epoch 00490: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4046 - acc: 0.2030 - val_loss: 4.1742 - val_acc: 0.0731
Epoch 491/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4053 - acc: 0.1949Epoch 00491: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4066 - acc: 0.1945 - val_loss: 4.2107 - val_acc: 0.0647
Epoch 492/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4187 - acc: 0.1974Epoch 00492: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4167 - acc: 0.1981 - val_loss: 4.0646 - val_acc: 0.0886
Epoch 493/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3971 - acc: 0.1992Epoch 00493: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3972 - acc: 0.1994 - val_loss: 4.1496 - val_acc: 0.0731
Epoch 494/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3983 - acc: 0.2030Epoch 00494: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3977 - acc: 0.2034 - val_loss: 4.1048 - val_acc: 0.0814
Epoch 495/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4057 - acc: 0.2108Epoch 00495: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4037 - acc: 0.2111 - val_loss: 4.0952 - val_acc: 0.0862
Epoch 496/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4153 - acc: 0.1956Epoch 00496: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4149 - acc: 0.1957 - val_loss: 4.1552 - val_acc: 0.0731
Epoch 497/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4025 - acc: 0.2006Epoch 00497: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4016 - acc: 0.2007 - val_loss: 4.1413 - val_acc: 0.0743
Epoch 498/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3910 - acc: 0.2089Epoch 00498: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3911 - acc: 0.2087 - val_loss: 4.1710 - val_acc: 0.0731
Epoch 499/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3921 - acc: 0.2053Epoch 00499: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3931 - acc: 0.2049 - val_loss: 4.2207 - val_acc: 0.0683
Epoch 500/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3959 - acc: 0.2041Epoch 00500: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3950 - acc: 0.2040 - val_loss: 4.0555 - val_acc: 0.0826
Epoch 501/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3881 - acc: 0.2089Epoch 00501: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3873 - acc: 0.2090 - val_loss: 3.9798 - val_acc: 0.0922
Epoch 502/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4042 - acc: 0.2056Epoch 00502: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4040 - acc: 0.2052 - val_loss: 4.2162 - val_acc: 0.0719
Epoch 503/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3884 - acc: 0.2051Epoch 00503: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3904 - acc: 0.2046 - val_loss: 4.2811 - val_acc: 0.0635
Epoch 504/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3928 - acc: 0.2081Epoch 00504: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3926 - acc: 0.2079 - val_loss: 4.2234 - val_acc: 0.0671
Epoch 505/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3961 - acc: 0.2074Epoch 00505: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3961 - acc: 0.2075 - val_loss: 4.2665 - val_acc: 0.0683
Epoch 506/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3827 - acc: 0.2012Epoch 00506: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3819 - acc: 0.2013 - val_loss: 4.0647 - val_acc: 0.0778
Epoch 507/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4027 - acc: 0.2044Epoch 00507: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4010 - acc: 0.2040 - val_loss: 4.0812 - val_acc: 0.0778
Epoch 508/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4001 - acc: 0.2036Epoch 00508: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3987 - acc: 0.2039 - val_loss: 4.1155 - val_acc: 0.0778
Epoch 509/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3966 - acc: 0.2072Epoch 00509: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3967 - acc: 0.2067 - val_loss: 4.1280 - val_acc: 0.0754
Epoch 510/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3871 - acc: 0.2030Epoch 00510: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3889 - acc: 0.2028 - val_loss: 4.2811 - val_acc: 0.0659
Epoch 511/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3739 - acc: 0.2000Epoch 00511: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3733 - acc: 0.2004 - val_loss: 4.2449 - val_acc: 0.0790
Epoch 512/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3991 - acc: 0.2015Epoch 00512: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3975 - acc: 0.2019 - val_loss: 4.1404 - val_acc: 0.0838
Epoch 513/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4189 - acc: 0.2008Epoch 00513: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4194 - acc: 0.2007 - val_loss: 4.2889 - val_acc: 0.0635
Epoch 514/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3971 - acc: 0.1980Epoch 00514: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3981 - acc: 0.1979 - val_loss: 4.1669 - val_acc: 0.0683
Epoch 515/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3953 - acc: 0.1991Epoch 00515: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3954 - acc: 0.1993 - val_loss: 4.1491 - val_acc: 0.0743
Epoch 516/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3905 - acc: 0.2048Epoch 00516: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3900 - acc: 0.2048 - val_loss: 4.2030 - val_acc: 0.0802
Epoch 517/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3979 - acc: 0.1982Epoch 00517: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3974 - acc: 0.1985 - val_loss: 4.1311 - val_acc: 0.0707
Epoch 518/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3808 - acc: 0.2002Epoch 00518: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3803 - acc: 0.1999 - val_loss: 4.1436 - val_acc: 0.0778
Epoch 519/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3872 - acc: 0.2021Epoch 00519: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3861 - acc: 0.2024 - val_loss: 4.0526 - val_acc: 0.0946
Epoch 520/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3839 - acc: 0.2054Epoch 00520: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3838 - acc: 0.2052 - val_loss: 4.2000 - val_acc: 0.0659
Epoch 521/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3809 - acc: 0.2065Epoch 00521: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3800 - acc: 0.2063 - val_loss: 4.1486 - val_acc: 0.0743
Epoch 522/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3769 - acc: 0.2086Epoch 00522: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3770 - acc: 0.2084 - val_loss: 4.2570 - val_acc: 0.0683
Epoch 523/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3851 - acc: 0.2024Epoch 00523: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3832 - acc: 0.2030 - val_loss: 4.1201 - val_acc: 0.0814
Epoch 524/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3910 - acc: 0.2029Epoch 00524: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3889 - acc: 0.2031 - val_loss: 4.0818 - val_acc: 0.0874
Epoch 525/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3838 - acc: 0.2030Epoch 00525: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3826 - acc: 0.2033 - val_loss: 4.1125 - val_acc: 0.0790
Epoch 526/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3908 - acc: 0.2027Epoch 00526: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3912 - acc: 0.2024 - val_loss: 4.0776 - val_acc: 0.0814
Epoch 527/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3777 - acc: 0.2155Epoch 00527: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3777 - acc: 0.2154 - val_loss: 4.1550 - val_acc: 0.0802
Epoch 528/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3999 - acc: 0.1976Epoch 00528: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4000 - acc: 0.1976 - val_loss: 4.2647 - val_acc: 0.0731
Epoch 529/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3985 - acc: 0.1965Epoch 00529: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3982 - acc: 0.1967 - val_loss: 4.2433 - val_acc: 0.0659
Epoch 530/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4047 - acc: 0.2021Epoch 00530: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4045 - acc: 0.2024 - val_loss: 4.1980 - val_acc: 0.0719
Epoch 531/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3879 - acc: 0.2011Epoch 00531: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3867 - acc: 0.2012 - val_loss: 4.1154 - val_acc: 0.0802
Epoch 532/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3916 - acc: 0.2143Epoch 00532: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3923 - acc: 0.2144 - val_loss: 4.1340 - val_acc: 0.0910
Epoch 533/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3698 - acc: 0.2056Epoch 00533: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3693 - acc: 0.2057 - val_loss: 4.1220 - val_acc: 0.0814
Epoch 534/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3820 - acc: 0.1980Epoch 00534: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3837 - acc: 0.1978 - val_loss: 4.1574 - val_acc: 0.0707
Epoch 535/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3798 - acc: 0.2096Epoch 00535: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3803 - acc: 0.2096 - val_loss: 4.2856 - val_acc: 0.0671
Epoch 536/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3612 - acc: 0.2057Epoch 00536: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3593 - acc: 0.2063 - val_loss: 4.2046 - val_acc: 0.0731
Epoch 537/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3834 - acc: 0.2014Epoch 00537: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3842 - acc: 0.2012 - val_loss: 4.2155 - val_acc: 0.0766
Epoch 538/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3919 - acc: 0.2027Epoch 00538: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3913 - acc: 0.2028 - val_loss: 4.2099 - val_acc: 0.0778
Epoch 539/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3818 - acc: 0.2071Epoch 00539: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3842 - acc: 0.2066 - val_loss: 4.1621 - val_acc: 0.0695
Epoch 540/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3674 - acc: 0.2099Epoch 00540: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3685 - acc: 0.2099 - val_loss: 4.1972 - val_acc: 0.0802
Epoch 541/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3704 - acc: 0.1986Epoch 00541: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3701 - acc: 0.1987 - val_loss: 4.0446 - val_acc: 0.0970
Epoch 542/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3839 - acc: 0.1973Epoch 00542: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3881 - acc: 0.1973 - val_loss: 4.4226 - val_acc: 0.0515
Epoch 543/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3805 - acc: 0.2018Epoch 00543: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3809 - acc: 0.2015 - val_loss: 4.1578 - val_acc: 0.0695
Epoch 544/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3867 - acc: 0.2006Epoch 00544: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3848 - acc: 0.2009 - val_loss: 4.1590 - val_acc: 0.0707
Epoch 545/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.4026 - acc: 0.2015Epoch 00545: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.4020 - acc: 0.2018 - val_loss: 4.2347 - val_acc: 0.0719
Epoch 546/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3640 - acc: 0.2120Epoch 00546: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3640 - acc: 0.2118 - val_loss: 4.1369 - val_acc: 0.0731
Epoch 547/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3613 - acc: 0.2093Epoch 00547: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3618 - acc: 0.2093 - val_loss: 4.1975 - val_acc: 0.0778
Epoch 548/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3863 - acc: 0.2011Epoch 00548: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3854 - acc: 0.2013 - val_loss: 4.0839 - val_acc: 0.0814
Epoch 549/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3875 - acc: 0.1970Epoch 00549: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3887 - acc: 0.1972 - val_loss: 4.1833 - val_acc: 0.0671
Epoch 550/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3691 - acc: 0.2098Epoch 00550: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3686 - acc: 0.2097 - val_loss: 4.1275 - val_acc: 0.0790
Epoch 551/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3989 - acc: 0.2044Epoch 00551: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3989 - acc: 0.2045 - val_loss: 4.0959 - val_acc: 0.0850
Epoch 552/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3755 - acc: 0.2072Epoch 00552: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3753 - acc: 0.2070 - val_loss: 4.2421 - val_acc: 0.0719
Epoch 553/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3744 - acc: 0.2114Epoch 00553: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3761 - acc: 0.2112 - val_loss: 4.2976 - val_acc: 0.0623
Epoch 554/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3725 - acc: 0.2054Epoch 00554: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3721 - acc: 0.2054 - val_loss: 4.1660 - val_acc: 0.0743
Epoch 555/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3808 - acc: 0.2033Epoch 00555: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3828 - acc: 0.2031 - val_loss: 4.2273 - val_acc: 0.0671
Epoch 556/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3594 - acc: 0.2069Epoch 00556: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3635 - acc: 0.2066 - val_loss: 4.3489 - val_acc: 0.0539
Epoch 557/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3815 - acc: 0.2078Epoch 00557: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3824 - acc: 0.2078 - val_loss: 4.2136 - val_acc: 0.0838
Epoch 558/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3737 - acc: 0.2090Epoch 00558: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3733 - acc: 0.2088 - val_loss: 4.1570 - val_acc: 0.0766
Epoch 559/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3681 - acc: 0.2119Epoch 00559: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3701 - acc: 0.2118 - val_loss: 4.2523 - val_acc: 0.0683
Epoch 560/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3614 - acc: 0.2068Epoch 00560: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3615 - acc: 0.2069 - val_loss: 4.3062 - val_acc: 0.0623
Epoch 561/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3674 - acc: 0.2116Epoch 00561: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3672 - acc: 0.2112 - val_loss: 4.3064 - val_acc: 0.0671
Epoch 562/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3585 - acc: 0.2078Epoch 00562: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3578 - acc: 0.2078 - val_loss: 4.2190 - val_acc: 0.0731
Epoch 563/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3676 - acc: 0.2045Epoch 00563: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3692 - acc: 0.2042 - val_loss: 4.1312 - val_acc: 0.0838
Epoch 564/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3573 - acc: 0.2065Epoch 00564: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3568 - acc: 0.2063 - val_loss: 4.1595 - val_acc: 0.0826
Epoch 565/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3592 - acc: 0.2092Epoch 00565: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3629 - acc: 0.2085 - val_loss: 4.4022 - val_acc: 0.0491
Epoch 566/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3648 - acc: 0.2098Epoch 00566: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3657 - acc: 0.2093 - val_loss: 4.2759 - val_acc: 0.0695
Epoch 567/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3814 - acc: 0.2018Epoch 00567: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3805 - acc: 0.2019 - val_loss: 4.2562 - val_acc: 0.0719
Epoch 568/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3609 - acc: 0.2072Epoch 00568: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3598 - acc: 0.2072 - val_loss: 4.1518 - val_acc: 0.0790
Epoch 569/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3611 - acc: 0.2056Epoch 00569: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3600 - acc: 0.2055 - val_loss: 4.1740 - val_acc: 0.0802
Epoch 570/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3739 - acc: 0.2066Epoch 00570: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3750 - acc: 0.2064 - val_loss: 4.2315 - val_acc: 0.0743
Epoch 571/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3691 - acc: 0.2125Epoch 00571: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3687 - acc: 0.2124 - val_loss: 4.2638 - val_acc: 0.0719
Epoch 572/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3591 - acc: 0.2047Epoch 00572: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3591 - acc: 0.2042 - val_loss: 4.1648 - val_acc: 0.0802
Epoch 573/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3623 - acc: 0.2117Epoch 00573: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3614 - acc: 0.2123 - val_loss: 4.1766 - val_acc: 0.0874
Epoch 574/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3594 - acc: 0.2095Epoch 00574: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3589 - acc: 0.2096 - val_loss: 4.2083 - val_acc: 0.0754
Epoch 575/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3539 - acc: 0.2083Epoch 00575: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3536 - acc: 0.2078 - val_loss: 4.1776 - val_acc: 0.0838
Epoch 576/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3491 - acc: 0.2108Epoch 00576: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3479 - acc: 0.2111 - val_loss: 4.1279 - val_acc: 0.0778
Epoch 577/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3419 - acc: 0.2093Epoch 00577: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3415 - acc: 0.2093 - val_loss: 4.1250 - val_acc: 0.0766
Epoch 578/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3555 - acc: 0.2101Epoch 00578: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3566 - acc: 0.2102 - val_loss: 4.2412 - val_acc: 0.0731
Epoch 579/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3631 - acc: 0.2027Epoch 00579: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3625 - acc: 0.2024 - val_loss: 4.2528 - val_acc: 0.0802
Epoch 580/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3663 - acc: 0.2102Epoch 00580: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3645 - acc: 0.2108 - val_loss: 4.2291 - val_acc: 0.0647
Epoch 581/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3603 - acc: 0.2065Epoch 00581: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3590 - acc: 0.2066 - val_loss: 4.1817 - val_acc: 0.0766
Epoch 582/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3717 - acc: 0.2050Epoch 00582: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3736 - acc: 0.2051 - val_loss: 4.2771 - val_acc: 0.0671
Epoch 583/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3473 - acc: 0.2152Epoch 00583: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3469 - acc: 0.2154 - val_loss: 4.1339 - val_acc: 0.0814
Epoch 584/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3525 - acc: 0.2128Epoch 00584: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3545 - acc: 0.2124 - val_loss: 4.2360 - val_acc: 0.0659
Epoch 585/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3640 - acc: 0.2057Epoch 00585: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3653 - acc: 0.2055 - val_loss: 4.3298 - val_acc: 0.0575
Epoch 586/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3628 - acc: 0.2051Epoch 00586: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3635 - acc: 0.2052 - val_loss: 4.2494 - val_acc: 0.0707
Epoch 587/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3635 - acc: 0.2122Epoch 00587: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3642 - acc: 0.2121 - val_loss: 4.2145 - val_acc: 0.0695
Epoch 588/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3844 - acc: 0.2090Epoch 00588: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3837 - acc: 0.2085 - val_loss: 4.1781 - val_acc: 0.0754
Epoch 589/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3412 - acc: 0.2120Epoch 00589: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3408 - acc: 0.2123 - val_loss: 4.1993 - val_acc: 0.0659
Epoch 590/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3497 - acc: 0.2125Epoch 00590: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3529 - acc: 0.2121 - val_loss: 4.3786 - val_acc: 0.0623
Epoch 591/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3542 - acc: 0.2065Epoch 00591: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3540 - acc: 0.2069 - val_loss: 4.0118 - val_acc: 0.1030
Epoch 592/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3537 - acc: 0.2098Epoch 00592: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3524 - acc: 0.2097 - val_loss: 4.1489 - val_acc: 0.0814
Epoch 593/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3613 - acc: 0.2105Epoch 00593: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3617 - acc: 0.2103 - val_loss: 4.2022 - val_acc: 0.0731
Epoch 594/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3567 - acc: 0.2032Epoch 00594: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3568 - acc: 0.2031 - val_loss: 4.1878 - val_acc: 0.0826
Epoch 595/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3661 - acc: 0.2107Epoch 00595: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3647 - acc: 0.2108 - val_loss: 4.0299 - val_acc: 0.0970
Epoch 596/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3644 - acc: 0.2048Epoch 00596: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3618 - acc: 0.2051 - val_loss: 4.2146 - val_acc: 0.0659
Epoch 597/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3644 - acc: 0.2038Epoch 00597: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3667 - acc: 0.2036 - val_loss: 4.2309 - val_acc: 0.0719
Epoch 598/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3761 - acc: 0.2050Epoch 00598: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3774 - acc: 0.2049 - val_loss: 4.3785 - val_acc: 0.0587
Epoch 599/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3690 - acc: 0.2086Epoch 00599: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3687 - acc: 0.2084 - val_loss: 4.1869 - val_acc: 0.0802
Epoch 600/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3443 - acc: 0.2092Epoch 00600: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3467 - acc: 0.2088 - val_loss: 4.3100 - val_acc: 0.0611
Epoch 601/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3653 - acc: 0.2101Epoch 00601: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3651 - acc: 0.2100 - val_loss: 4.2845 - val_acc: 0.0683
Epoch 602/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3573 - acc: 0.2062Epoch 00602: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3571 - acc: 0.2063 - val_loss: 4.3219 - val_acc: 0.0671
Epoch 603/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3559 - acc: 0.2050Epoch 00603: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3555 - acc: 0.2046 - val_loss: 4.2659 - val_acc: 0.0647
Epoch 604/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3581 - acc: 0.2063Epoch 00604: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3579 - acc: 0.2061 - val_loss: 4.2826 - val_acc: 0.0587
Epoch 605/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3477 - acc: 0.2089Epoch 00605: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3470 - acc: 0.2093 - val_loss: 4.2126 - val_acc: 0.0731
Epoch 606/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3549 - acc: 0.2134Epoch 00606: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3548 - acc: 0.2136 - val_loss: 4.1712 - val_acc: 0.0814
Epoch 607/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3547 - acc: 0.2057Epoch 00607: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3557 - acc: 0.2058 - val_loss: 4.2955 - val_acc: 0.0587
Epoch 608/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3661 - acc: 0.2074Epoch 00608: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3660 - acc: 0.2073 - val_loss: 4.2779 - val_acc: 0.0623
Epoch 609/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3438 - acc: 0.2075Epoch 00609: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3455 - acc: 0.2075 - val_loss: 4.1869 - val_acc: 0.0814
Epoch 610/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3423 - acc: 0.2092Epoch 00610: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3429 - acc: 0.2093 - val_loss: 4.2091 - val_acc: 0.0778
Epoch 611/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3433 - acc: 0.2120Epoch 00611: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3431 - acc: 0.2121 - val_loss: 4.2650 - val_acc: 0.0671
Epoch 612/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3517 - acc: 0.2096Epoch 00612: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3516 - acc: 0.2097 - val_loss: 4.2406 - val_acc: 0.0754
Epoch 613/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3421 - acc: 0.2174Epoch 00613: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3419 - acc: 0.2172 - val_loss: 4.2588 - val_acc: 0.0647
Epoch 614/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3513 - acc: 0.2098Epoch 00614: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3505 - acc: 0.2099 - val_loss: 4.2371 - val_acc: 0.0695
Epoch 615/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3661 - acc: 0.2014Epoch 00615: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3655 - acc: 0.2013 - val_loss: 4.1377 - val_acc: 0.0802
Epoch 616/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3470 - acc: 0.2140Epoch 00616: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3469 - acc: 0.2139 - val_loss: 4.1983 - val_acc: 0.0814
Epoch 617/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3630 - acc: 0.2074Epoch 00617: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3635 - acc: 0.2072 - val_loss: 4.2406 - val_acc: 0.0790
Epoch 618/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3357 - acc: 0.2122Epoch 00618: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3353 - acc: 0.2124 - val_loss: 4.2749 - val_acc: 0.0707
Epoch 619/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3332 - acc: 0.2078Epoch 00619: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3339 - acc: 0.2078 - val_loss: 4.2441 - val_acc: 0.0671
Epoch 620/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3524 - acc: 0.2150Epoch 00620: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3515 - acc: 0.2151 - val_loss: 4.0733 - val_acc: 0.0910
Epoch 621/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3467 - acc: 0.2125Epoch 00621: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3463 - acc: 0.2124 - val_loss: 4.2276 - val_acc: 0.0707
Epoch 622/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3406 - acc: 0.2147Epoch 00622: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3405 - acc: 0.2145 - val_loss: 4.2081 - val_acc: 0.0814
Epoch 623/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3414 - acc: 0.2072Epoch 00623: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3422 - acc: 0.2067 - val_loss: 4.2079 - val_acc: 0.0790
Epoch 624/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3578 - acc: 0.2090Epoch 00624: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3587 - acc: 0.2090 - val_loss: 4.3459 - val_acc: 0.0563
Epoch 625/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3578 - acc: 0.2095Epoch 00625: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3566 - acc: 0.2094 - val_loss: 4.1829 - val_acc: 0.0862
Epoch 626/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3362 - acc: 0.2138Epoch 00626: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3355 - acc: 0.2139 - val_loss: 4.1209 - val_acc: 0.0958
Epoch 627/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3495 - acc: 0.2021Epoch 00627: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3490 - acc: 0.2025 - val_loss: 4.1791 - val_acc: 0.0826
Epoch 628/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3491 - acc: 0.2029Epoch 00628: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3484 - acc: 0.2030 - val_loss: 4.2410 - val_acc: 0.0707
[Epochs 629-767 condensed for readability — val_loss did not improve at any epoch in this range. Training loss plateaued around 3.31-3.37 (acc ~0.20-0.22) while val_loss fluctuated between ~4.00 and ~4.47 (val_acc ~0.05-0.11), i.e. the model stopped generalizing long before the 1000-epoch budget; an early-stopping criterion on val_loss would have ended this run hundreds of epochs sooner.]
Epoch 768/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3454 - acc: 0.2077Epoch 00768: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3443 - acc: 0.2082 - val_loss: 4.0857 - val_acc: 0.0970
Epoch 769/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3382 - acc: 0.2060Epoch 00769: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3377 - acc: 0.2055 - val_loss: 4.1083 - val_acc: 0.0922
Epoch 770/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3432 - acc: 0.2032Epoch 00770: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3436 - acc: 0.2034 - val_loss: 4.2389 - val_acc: 0.0766
Epoch 771/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3311 - acc: 0.2138Epoch 00771: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3323 - acc: 0.2135 - val_loss: 4.3553 - val_acc: 0.0635
Epoch 772/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3443 - acc: 0.2074Epoch 00772: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3465 - acc: 0.2070 - val_loss: 4.2680 - val_acc: 0.0707
Epoch 773/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3496 - acc: 0.1997Epoch 00773: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3483 - acc: 0.1999 - val_loss: 4.0882 - val_acc: 0.0838
Epoch 774/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3513 - acc: 0.2056Epoch 00774: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3501 - acc: 0.2060 - val_loss: 4.1714 - val_acc: 0.0766
Epoch 775/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3347 - acc: 0.2110Epoch 00775: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3346 - acc: 0.2111 - val_loss: 4.1457 - val_acc: 0.0886
Epoch 776/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3244 - acc: 0.2072Epoch 00776: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3230 - acc: 0.2073 - val_loss: 4.0999 - val_acc: 0.0910
Epoch 777/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3321 - acc: 0.2099Epoch 00777: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3322 - acc: 0.2097 - val_loss: 4.3082 - val_acc: 0.0599
Epoch 778/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3385 - acc: 0.2054Epoch 00778: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3377 - acc: 0.2054 - val_loss: 4.2435 - val_acc: 0.0695
Epoch 779/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3331 - acc: 0.2081Epoch 00779: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3332 - acc: 0.2081 - val_loss: 4.1697 - val_acc: 0.0898
Epoch 780/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3345 - acc: 0.2131Epoch 00780: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3346 - acc: 0.2129 - val_loss: 4.1516 - val_acc: 0.0790
Epoch 781/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3182 - acc: 0.2152Epoch 00781: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3170 - acc: 0.2151 - val_loss: 4.1347 - val_acc: 0.0826
Epoch 782/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3354 - acc: 0.2126Epoch 00782: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3341 - acc: 0.2126 - val_loss: 4.1281 - val_acc: 0.0934
Epoch 783/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3373 - acc: 0.2098Epoch 00783: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3376 - acc: 0.2094 - val_loss: 4.2364 - val_acc: 0.0754
Epoch 784/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3232 - acc: 0.2156Epoch 00784: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3206 - acc: 0.2162 - val_loss: 4.2001 - val_acc: 0.0898
Epoch 785/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3325 - acc: 0.2095Epoch 00785: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3321 - acc: 0.2094 - val_loss: 4.2126 - val_acc: 0.0754
Epoch 786/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3607 - acc: 0.2117Epoch 00786: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3619 - acc: 0.2112 - val_loss: 4.2237 - val_acc: 0.0766
Epoch 787/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3633 - acc: 0.2032Epoch 00787: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3623 - acc: 0.2037 - val_loss: 4.2098 - val_acc: 0.0731
Epoch 788/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3345 - acc: 0.2144Epoch 00788: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3348 - acc: 0.2142 - val_loss: 4.2783 - val_acc: 0.0707
Epoch 789/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3351 - acc: 0.2027Epoch 00789: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3350 - acc: 0.2028 - val_loss: 4.1735 - val_acc: 0.0838
Epoch 790/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3389 - acc: 0.2131Epoch 00790: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3396 - acc: 0.2130 - val_loss: 4.2864 - val_acc: 0.0671
Epoch 791/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3423 - acc: 0.2129Epoch 00791: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3426 - acc: 0.2129 - val_loss: 4.2664 - val_acc: 0.0695
Epoch 792/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3516 - acc: 0.2090Epoch 00792: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3524 - acc: 0.2091 - val_loss: 4.2497 - val_acc: 0.0719
Epoch 793/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3391 - acc: 0.2120Epoch 00793: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3392 - acc: 0.2121 - val_loss: 4.2737 - val_acc: 0.0683
Epoch 794/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3492 - acc: 0.2057Epoch 00794: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3499 - acc: 0.2058 - val_loss: 4.1909 - val_acc: 0.0766
Epoch 795/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3539 - acc: 0.2060Epoch 00795: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3543 - acc: 0.2061 - val_loss: 4.3689 - val_acc: 0.0575
Epoch 796/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3354 - acc: 0.2162Epoch 00796: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3357 - acc: 0.2160 - val_loss: 4.2051 - val_acc: 0.0731
Epoch 797/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3446 - acc: 0.2039Epoch 00797: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3433 - acc: 0.2040 - val_loss: 4.1990 - val_acc: 0.0802
Epoch 798/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3528 - acc: 0.2086Epoch 00798: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3530 - acc: 0.2087 - val_loss: 4.1498 - val_acc: 0.0934
Epoch 799/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3659 - acc: 0.2060Epoch 00799: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3650 - acc: 0.2063 - val_loss: 4.1262 - val_acc: 0.0743
Epoch 800/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3370 - acc: 0.2099Epoch 00800: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3378 - acc: 0.2097 - val_loss: 4.2140 - val_acc: 0.0754
Epoch 801/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3492 - acc: 0.2068Epoch 00801: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3513 - acc: 0.2063 - val_loss: 4.0567 - val_acc: 0.0838
Epoch 802/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3483 - acc: 0.2126Epoch 00802: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3477 - acc: 0.2127 - val_loss: 4.3185 - val_acc: 0.0683
Epoch 803/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3458 - acc: 0.2113Epoch 00803: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3467 - acc: 0.2109 - val_loss: 4.2156 - val_acc: 0.0778
Epoch 804/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3240 - acc: 0.2159Epoch 00804: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3233 - acc: 0.2162 - val_loss: 4.0236 - val_acc: 0.0790
Epoch 805/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3612 - acc: 0.2074Epoch 00805: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3611 - acc: 0.2075 - val_loss: 4.1522 - val_acc: 0.0731
Epoch 806/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3487 - acc: 0.2027Epoch 00806: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3497 - acc: 0.2025 - val_loss: 4.2613 - val_acc: 0.0695
Epoch 807/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3292 - acc: 0.2117Epoch 00807: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3291 - acc: 0.2117 - val_loss: 4.2959 - val_acc: 0.0647
Epoch 808/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3431 - acc: 0.2122Epoch 00808: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3430 - acc: 0.2117 - val_loss: 4.2158 - val_acc: 0.0778
Epoch 809/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3436 - acc: 0.2054Epoch 00809: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3445 - acc: 0.2057 - val_loss: 4.2307 - val_acc: 0.0766
Epoch 810/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3565 - acc: 0.2137Epoch 00810: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3542 - acc: 0.2144 - val_loss: 4.0112 - val_acc: 0.1006
Epoch 811/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3355 - acc: 0.2135Epoch 00811: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3348 - acc: 0.2135 - val_loss: 4.2638 - val_acc: 0.0826
Epoch 812/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3364 - acc: 0.2177Epoch 00812: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3353 - acc: 0.2175 - val_loss: 4.1043 - val_acc: 0.0898
Epoch 813/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3711 - acc: 0.2084Epoch 00813: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3709 - acc: 0.2084 - val_loss: 4.2930 - val_acc: 0.0707
Epoch 814/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3493 - acc: 0.1994Epoch 00814: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3478 - acc: 0.2000 - val_loss: 4.0543 - val_acc: 0.0946
Epoch 815/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3419 - acc: 0.2132Epoch 00815: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3405 - acc: 0.2132 - val_loss: 4.1965 - val_acc: 0.0874
Epoch 816/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3487 - acc: 0.2155Epoch 00816: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3493 - acc: 0.2153 - val_loss: 4.2382 - val_acc: 0.0731
Epoch 817/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3486 - acc: 0.2080Epoch 00817: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3516 - acc: 0.2073 - val_loss: 4.2503 - val_acc: 0.0695
Epoch 818/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3454 - acc: 0.2023Epoch 00818: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3472 - acc: 0.2018 - val_loss: 4.2413 - val_acc: 0.0683
Epoch 819/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3507 - acc: 0.2095Epoch 00819: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3505 - acc: 0.2094 - val_loss: 4.1533 - val_acc: 0.0802
Epoch 820/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3494 - acc: 0.2084Epoch 00820: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3497 - acc: 0.2082 - val_loss: 4.3198 - val_acc: 0.0707
Epoch 821/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3481 - acc: 0.2113Epoch 00821: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3478 - acc: 0.2112 - val_loss: 4.2702 - val_acc: 0.0647
Epoch 822/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3455 - acc: 0.2066Epoch 00822: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3436 - acc: 0.2067 - val_loss: 4.1379 - val_acc: 0.0802
Epoch 823/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3522 - acc: 0.2069Epoch 00823: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3530 - acc: 0.2064 - val_loss: 4.2449 - val_acc: 0.0647
Epoch 824/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3439 - acc: 0.2128Epoch 00824: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3451 - acc: 0.2129 - val_loss: 4.2692 - val_acc: 0.0647
Epoch 825/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3610 - acc: 0.2035Epoch 00825: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3635 - acc: 0.2031 - val_loss: 4.4535 - val_acc: 0.0479
Epoch 826/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3609 - acc: 0.2096Epoch 00826: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3613 - acc: 0.2093 - val_loss: 4.2881 - val_acc: 0.0635
Epoch 827/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3334 - acc: 0.2099Epoch 00827: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3318 - acc: 0.2103 - val_loss: 4.0651 - val_acc: 0.0778
Epoch 828/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3654 - acc: 0.2137Epoch 00828: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3645 - acc: 0.2136 - val_loss: 4.1572 - val_acc: 0.0754
Epoch 829/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3356 - acc: 0.2104Epoch 00829: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3341 - acc: 0.2103 - val_loss: 4.1890 - val_acc: 0.0707
Epoch 830/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3686 - acc: 0.2069Epoch 00830: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3695 - acc: 0.2067 - val_loss: 4.1715 - val_acc: 0.0790
Epoch 831/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3506 - acc: 0.2084Epoch 00831: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3509 - acc: 0.2084 - val_loss: 4.3039 - val_acc: 0.0599
Epoch 832/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3549 - acc: 0.2084Epoch 00832: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3573 - acc: 0.2079 - val_loss: 4.3813 - val_acc: 0.0551
Epoch 833/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3528 - acc: 0.2086Epoch 00833: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3522 - acc: 0.2085 - val_loss: 4.1631 - val_acc: 0.0790
Epoch 834/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3617 - acc: 0.2074Epoch 00834: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3599 - acc: 0.2081 - val_loss: 4.1735 - val_acc: 0.0754
Epoch 835/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3497 - acc: 0.2062Epoch 00835: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3502 - acc: 0.2066 - val_loss: 4.1527 - val_acc: 0.0934
Epoch 836/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3620 - acc: 0.2002Epoch 00836: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3626 - acc: 0.2000 - val_loss: 4.2450 - val_acc: 0.0790
Epoch 837/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3505 - acc: 0.2083Epoch 00837: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3502 - acc: 0.2084 - val_loss: 4.1511 - val_acc: 0.0826
Epoch 838/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3611 - acc: 0.2074Epoch 00838: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3606 - acc: 0.2073 - val_loss: 4.0928 - val_acc: 0.0826
Epoch 839/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3680 - acc: 0.2053Epoch 00839: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3682 - acc: 0.2052 - val_loss: 4.3069 - val_acc: 0.0695
Epoch 840/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3482 - acc: 0.2125Epoch 00840: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3504 - acc: 0.2126 - val_loss: 4.3687 - val_acc: 0.0563
Epoch 841/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3368 - acc: 0.2113Epoch 00841: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3366 - acc: 0.2112 - val_loss: 4.2085 - val_acc: 0.0766
Epoch 842/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3574 - acc: 0.2084Epoch 00842: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3577 - acc: 0.2079 - val_loss: 4.3372 - val_acc: 0.0671
Epoch 843/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3536 - acc: 0.2084Epoch 00843: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3535 - acc: 0.2087 - val_loss: 4.0695 - val_acc: 0.0886
Epoch 844/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3676 - acc: 0.2107Epoch 00844: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3671 - acc: 0.2106 - val_loss: 4.2442 - val_acc: 0.0778
Epoch 845/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3573 - acc: 0.2069Epoch 00845: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3572 - acc: 0.2072 - val_loss: 4.1320 - val_acc: 0.0922
Epoch 846/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3681 - acc: 0.2053Epoch 00846: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3689 - acc: 0.2052 - val_loss: 4.2016 - val_acc: 0.0754
Epoch 847/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3833 - acc: 0.2008Epoch 00847: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3832 - acc: 0.2009 - val_loss: 4.2363 - val_acc: 0.0719
Epoch 848/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3722 - acc: 0.2066Epoch 00848: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3726 - acc: 0.2069 - val_loss: 4.3836 - val_acc: 0.0575
Epoch 849/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3480 - acc: 0.2135Epoch 00849: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3489 - acc: 0.2133 - val_loss: 4.3750 - val_acc: 0.0539
Epoch 850/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3688 - acc: 0.2117Epoch 00850: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3696 - acc: 0.2117 - val_loss: 4.3068 - val_acc: 0.0575
Epoch 851/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3548 - acc: 0.2099Epoch 00851: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3547 - acc: 0.2097 - val_loss: 4.2315 - val_acc: 0.0743
Epoch 852/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3720 - acc: 0.2089Epoch 00852: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3727 - acc: 0.2087 - val_loss: 4.2081 - val_acc: 0.0707
Epoch 853/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3695 - acc: 0.2120Epoch 00853: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3686 - acc: 0.2123 - val_loss: 4.1459 - val_acc: 0.0743
Epoch 854/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3695 - acc: 0.2020Epoch 00854: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3685 - acc: 0.2021 - val_loss: 4.2097 - val_acc: 0.0695
Epoch 855/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3720 - acc: 0.2090Epoch 00855: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3699 - acc: 0.2093 - val_loss: 4.1507 - val_acc: 0.0754
Epoch 856/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3742 - acc: 0.2032Epoch 00856: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3739 - acc: 0.2036 - val_loss: 4.2186 - val_acc: 0.0850
Epoch 857/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3806 - acc: 0.2063Epoch 00857: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3788 - acc: 0.2067 - val_loss: 4.1939 - val_acc: 0.1030
Epoch 858/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3649 - acc: 0.2024Epoch 00858: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3644 - acc: 0.2028 - val_loss: 4.3243 - val_acc: 0.0754
Epoch 859/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3724 - acc: 0.2036Epoch 00859: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3714 - acc: 0.2036 - val_loss: 4.1502 - val_acc: 0.0778
Epoch 860/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3719 - acc: 0.2084Epoch 00860: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3730 - acc: 0.2078 - val_loss: 4.1864 - val_acc: 0.0743
Epoch 861/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3555 - acc: 0.2138Epoch 00861: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3542 - acc: 0.2144 - val_loss: 4.1919 - val_acc: 0.0719
Epoch 862/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3784 - acc: 0.2062Epoch 00862: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3796 - acc: 0.2060 - val_loss: 4.0646 - val_acc: 0.0898
Epoch 863/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3611 - acc: 0.2056Epoch 00863: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3615 - acc: 0.2054 - val_loss: 4.2625 - val_acc: 0.0635
Epoch 864/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3467 - acc: 0.2152Epoch 00864: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3468 - acc: 0.2153 - val_loss: 4.2864 - val_acc: 0.0719
Epoch 865/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3722 - acc: 0.1980Epoch 00865: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3716 - acc: 0.1985 - val_loss: 4.1387 - val_acc: 0.0802
Epoch 866/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3764 - acc: 0.2093Epoch 00866: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3774 - acc: 0.2091 - val_loss: 4.1505 - val_acc: 0.0814
Epoch 867/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3743 - acc: 0.2023Epoch 00867: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3724 - acc: 0.2027 - val_loss: 4.1441 - val_acc: 0.0838
Epoch 868/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3545 - acc: 0.2050Epoch 00868: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3539 - acc: 0.2055 - val_loss: 4.3423 - val_acc: 0.0659
Epoch 869/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3710 - acc: 0.2021Epoch 00869: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3719 - acc: 0.2021 - val_loss: 4.2829 - val_acc: 0.0671
Epoch 870/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3559 - acc: 0.2062Epoch 00870: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3548 - acc: 0.2067 - val_loss: 4.2584 - val_acc: 0.0635
Epoch 871/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3462 - acc: 0.2080Epoch 00871: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3457 - acc: 0.2076 - val_loss: 4.1852 - val_acc: 0.0778
Epoch 872/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3489 - acc: 0.2068Epoch 00872: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3496 - acc: 0.2061 - val_loss: 4.3757 - val_acc: 0.0575
Epoch 873/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3642 - acc: 0.2095Epoch 00873: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3653 - acc: 0.2093 - val_loss: 4.3312 - val_acc: 0.0623
Epoch 874/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3768 - acc: 0.1995Epoch 00874: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3768 - acc: 0.1994 - val_loss: 4.2003 - val_acc: 0.0838
Epoch 875/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3861 - acc: 0.2096Epoch 00875: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3859 - acc: 0.2097 - val_loss: 4.1566 - val_acc: 0.0862
Epoch 876/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3644 - acc: 0.1997Epoch 00876: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3645 - acc: 0.1997 - val_loss: 4.2248 - val_acc: 0.0659
Epoch 877/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3534 - acc: 0.2050Epoch 00877: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3542 - acc: 0.2051 - val_loss: 4.4700 - val_acc: 0.0491
Epoch 878/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3956 - acc: 0.2012Epoch 00878: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3967 - acc: 0.2012 - val_loss: 4.1662 - val_acc: 0.0862
Epoch 879/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3805 - acc: 0.2060Epoch 00879: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3792 - acc: 0.2061 - val_loss: 4.2102 - val_acc: 0.0707
Epoch 880/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3855 - acc: 0.2110Epoch 00880: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3867 - acc: 0.2109 - val_loss: 4.4884 - val_acc: 0.0479
Epoch 881/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3819 - acc: 0.2047Epoch 00881: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3817 - acc: 0.2046 - val_loss: 4.2758 - val_acc: 0.0719
Epoch 882/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3705 - acc: 0.2054Epoch 00882: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3685 - acc: 0.2061 - val_loss: 4.0731 - val_acc: 0.0910
Epoch 883/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3695 - acc: 0.1998Epoch 00883: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3701 - acc: 0.1997 - val_loss: 4.5034 - val_acc: 0.0455
Epoch 884/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3844 - acc: 0.2117Epoch 00884: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3848 - acc: 0.2118 - val_loss: 4.1449 - val_acc: 0.0826
Epoch 885/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3878 - acc: 0.2041Epoch 00885: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3873 - acc: 0.2040 - val_loss: 4.3056 - val_acc: 0.0587
Epoch 886/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3724 - acc: 0.2006Epoch 00886: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3711 - acc: 0.2006 - val_loss: 4.2194 - val_acc: 0.0814
Epoch 887/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3824 - acc: 0.2035Epoch 00887: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3822 - acc: 0.2034 - val_loss: 4.4007 - val_acc: 0.0563
Epoch 888/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3803 - acc: 0.2053Epoch 00888: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3812 - acc: 0.2052 - val_loss: 4.1753 - val_acc: 0.0731
Epoch 889/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3849 - acc: 0.2045Epoch 00889: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3840 - acc: 0.2045 - val_loss: 4.2966 - val_acc: 0.0611
Epoch 890/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3786 - acc: 0.2060Epoch 00890: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3792 - acc: 0.2060 - val_loss: 4.2616 - val_acc: 0.0599
Epoch 891/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3782 - acc: 0.2071Epoch 00891: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3767 - acc: 0.2072 - val_loss: 4.1854 - val_acc: 0.0766
Epoch 892/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3724 - acc: 0.2081Epoch 00892: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3734 - acc: 0.2081 - val_loss: 4.1839 - val_acc: 0.0683
Epoch 893/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3925 - acc: 0.1995Epoch 00893: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3918 - acc: 0.1993 - val_loss: 4.2482 - val_acc: 0.0790
Epoch 894/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3771 - acc: 0.2027Epoch 00894: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3767 - acc: 0.2028 - val_loss: 4.3709 - val_acc: 0.0635
Epoch 895/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3868 - acc: 0.2003Epoch 00895: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3873 - acc: 0.2004 - val_loss: 4.3580 - val_acc: 0.0611
Epoch 896/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3730 - acc: 0.2017Epoch 00896: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3747 - acc: 0.2018 - val_loss: 4.2444 - val_acc: 0.0766
Epoch 897/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3704 - acc: 0.2101Epoch 00897: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3704 - acc: 0.2097 - val_loss: 4.2427 - val_acc: 0.0695
Epoch 898/1000
6660/6680 [============================>.] - ETA: 0s - loss: 3.3839 - acc: 0.2032Epoch 00898: val_loss did not improve
6680/6680 [==============================] - 22s 3ms/step - loss: 3.3859 - acc: 0.2030 - val_loss: 4.1482 - val_acc: 0.0790
Epoch 899/1000
2860/6680 [===========>..................] - ETA: 12s - loss: 3.3951 - acc: 0.1993
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-22-9f558ff8340c> in <module>()
     10 model.fit(train_tensors, train_targets, 
     11           validation_data=(valid_tensors, valid_targets),
---> 12           epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)

/opt/conda/lib/python3.6/site-packages/keras/models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
    891                               class_weight=class_weight,
    892                               sample_weight=sample_weight,
--> 893                               initial_epoch=initial_epoch)
    894 
    895     def evaluate(self, x, y, batch_size=32, verbose=1,

/opt/conda/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1629                               initial_epoch=initial_epoch,
   1630                               steps_per_epoch=steps_per_epoch,
-> 1631                               validation_steps=validation_steps)
   1632 
   1633     def evaluate(self, x=None, y=None,

/opt/conda/lib/python3.6/site-packages/keras/engine/training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
   1211                     batch_logs['size'] = len(batch_ids)
   1212                     callbacks.on_batch_begin(batch_index, batch_logs)
-> 1213                     outs = f(ins_batch)
   1214                     if not isinstance(outs, list):
   1215                         outs = [outs]

/opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
   2330         updated = session.run(self.outputs + [self.updates_op],
   2331                               feed_dict=feed_dict,
-> 2332                               **self.session_kwargs)
   2333         return updated[:len(self.outputs)]
   2334 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1122     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1123       results = self._do_run(handle, final_targets, final_fetches,
-> 1124                              feed_dict_tensor, options, run_metadata)
   1125     else:
   1126       results = []

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1319     if handle is None:
   1320       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321                            options, run_metadata)
   1322     else:
   1323       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1325   def _do_call(self, fn, *args):
   1326     try:
-> 1327       return fn(*args)
   1328     except errors.OpError as e:
   1329       message = compat.as_text(e.message)

/opt/conda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1304           return tf_session.TF_Run(session, options,
   1305                                    feed_dict, fetch_list, target_list,
-> 1306                                    status, run_metadata)
   1307 
   1308     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

Load the Model with the Best Validation Loss

In [23]:
model.load_weights('saved_models/weights.best.from_scratch.hdf5')

Test the Model

Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.

In [24]:
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]

# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 10.7656%

Step 4: Use a CNN to Classify Dog Breeds

To reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN.

Obtain Bottleneck Features

In [23]:
bottleneck_features = np.load('bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']

Model Architecture

The model uses the pre-trained VGG-16 network as a fixed feature extractor: the last convolutional output of VGG-16 is fed as input to our model. We add only a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
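The two added layers are easy to reason about numerically. A minimal NumPy sketch (random values stand in for real convolutional activations; shapes chosen to match the VGG-16 bottleneck output) shows how global average pooling collapses each feature map to a single value, and how the softmax turns the dense layer's outputs into breed probabilities:

```python
import numpy as np

# one VGG-16 bottleneck tensor: a 7x7 spatial grid of 512 feature maps
feature_maps = np.random.rand(7, 7, 512)

# global average pooling: average over the two spatial axes,
# leaving a single value per feature map
pooled = feature_maps.mean(axis=(0, 1))
print(pooled.shape)  # (512,)

# the dense layer maps these 512 values to 133 scores (one per breed);
# the logits here are illustrative placeholders
logits = np.random.rand(133)

# softmax: exponentiate and normalize (subtracting the max for stability)
exp_scores = np.exp(logits - logits.max())
probs = exp_scores / exp_scores.sum()
print(probs.shape)  # (133,)
```

The predicted breed index is then simply `np.argmax(probs)`, exactly as in the test-accuracy cells.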

In [24]:
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))

VGG16_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
global_average_pooling2d_2 ( (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 133)               68229     
=================================================================
Total params: 68,229
Trainable params: 68,229
Non-trainable params: 0
_________________________________________________________________

Compile the Model

In [25]:
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

Train the Model

In [26]:
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5', 
                               verbose=1, save_best_only=True)

VGG16_model.fit(train_VGG16, train_targets, 
          validation_data=(valid_VGG16, valid_targets),
          epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples
Epoch 1/20
6620/6680 [============================>.] - ETA: 0s - loss: 11.9244 - acc: 0.1266Epoch 00001: val_loss improved from inf to 10.19969, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 286us/step - loss: 11.9210 - acc: 0.1275 - val_loss: 10.1997 - val_acc: 0.2144
Epoch 2/20
6620/6680 [============================>.] - ETA: 0s - loss: 9.2872 - acc: 0.3082Epoch 00002: val_loss improved from 10.19969 to 9.36274, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 243us/step - loss: 9.2861 - acc: 0.3087 - val_loss: 9.3627 - val_acc: 0.3054
Epoch 3/20
6480/6680 [============================>.] - ETA: 0s - loss: 8.7375 - acc: 0.3872Epoch 00003: val_loss improved from 9.36274 to 9.18606, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 241us/step - loss: 8.7393 - acc: 0.3865 - val_loss: 9.1861 - val_acc: 0.3353
Epoch 4/20
6460/6680 [============================>.] - ETA: 0s - loss: 8.5216 - acc: 0.4192Epoch 00004: val_loss improved from 9.18606 to 9.05463, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 242us/step - loss: 8.5248 - acc: 0.4193 - val_loss: 9.0546 - val_acc: 0.3545
Epoch 5/20
6460/6680 [============================>.] - ETA: 0s - loss: 8.3842 - acc: 0.4381Epoch 00005: val_loss improved from 9.05463 to 8.94786, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 241us/step - loss: 8.3931 - acc: 0.4374 - val_loss: 8.9479 - val_acc: 0.3581
Epoch 6/20
6660/6680 [============================>.] - ETA: 0s - loss: 8.2706 - acc: 0.4511Epoch 00006: val_loss did not improve
6680/6680 [==============================] - 2s 242us/step - loss: 8.2724 - acc: 0.4510 - val_loss: 8.9751 - val_acc: 0.3713
Epoch 7/20
6480/6680 [============================>.] - ETA: 0s - loss: 8.1750 - acc: 0.4673Epoch 00007: val_loss improved from 8.94786 to 8.80492, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 242us/step - loss: 8.1858 - acc: 0.4663 - val_loss: 8.8049 - val_acc: 0.3808
Epoch 8/20
6480/6680 [============================>.] - ETA: 0s - loss: 8.1454 - acc: 0.4727Epoch 00008: val_loss improved from 8.80492 to 8.79073, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 239us/step - loss: 8.1412 - acc: 0.4723 - val_loss: 8.7907 - val_acc: 0.3916
Epoch 9/20
6480/6680 [============================>.] - ETA: 0s - loss: 8.1298 - acc: 0.4789Epoch 00009: val_loss improved from 8.79073 to 8.77258, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 239us/step - loss: 8.1023 - acc: 0.4802 - val_loss: 8.7726 - val_acc: 0.3916
Epoch 10/20
6480/6680 [============================>.] - ETA: 0s - loss: 7.9428 - acc: 0.4840Epoch 00010: val_loss improved from 8.77258 to 8.62654, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 239us/step - loss: 7.9503 - acc: 0.4829 - val_loss: 8.6265 - val_acc: 0.3796
Epoch 11/20
6500/6680 [============================>.] - ETA: 0s - loss: 7.8243 - acc: 0.4965Epoch 00011: val_loss improved from 8.62654 to 8.51500, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 239us/step - loss: 7.8059 - acc: 0.4972 - val_loss: 8.5150 - val_acc: 0.3940
Epoch 12/20
6480/6680 [============================>.] - ETA: 0s - loss: 7.7641 - acc: 0.5051Epoch 00012: val_loss did not improve
6680/6680 [==============================] - 2s 237us/step - loss: 7.7671 - acc: 0.5049 - val_loss: 8.5244 - val_acc: 0.4012
Epoch 13/20
6480/6680 [============================>.] - ETA: 0s - loss: 7.7501 - acc: 0.5091Epoch 00013: val_loss did not improve
6680/6680 [==============================] - 2s 239us/step - loss: 7.7439 - acc: 0.5094 - val_loss: 8.6358 - val_acc: 0.3904
Epoch 14/20
6460/6680 [============================>.] - ETA: 0s - loss: 7.7020 - acc: 0.5115Epoch 00014: val_loss improved from 8.51500 to 8.49493, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 240us/step - loss: 7.6895 - acc: 0.5123 - val_loss: 8.4949 - val_acc: 0.4000
Epoch 15/20
6480/6680 [============================>.] - ETA: 0s - loss: 7.5880 - acc: 0.5179Epoch 00015: val_loss improved from 8.49493 to 8.47897, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 242us/step - loss: 7.5931 - acc: 0.5175 - val_loss: 8.4790 - val_acc: 0.3940
Epoch 16/20
6640/6680 [============================>.] - ETA: 0s - loss: 7.4253 - acc: 0.5265Epoch 00016: val_loss improved from 8.47897 to 8.40399, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 244us/step - loss: 7.4322 - acc: 0.5259 - val_loss: 8.4040 - val_acc: 0.4096
Epoch 17/20
6480/6680 [============================>.] - ETA: 0s - loss: 7.3516 - acc: 0.5346Epoch 00017: val_loss improved from 8.40399 to 8.35653, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 241us/step - loss: 7.3537 - acc: 0.5344 - val_loss: 8.3565 - val_acc: 0.4036
Epoch 18/20
6440/6680 [===========================>..] - ETA: 0s - loss: 7.3247 - acc: 0.5390Epoch 00018: val_loss improved from 8.35653 to 8.31001, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 243us/step - loss: 7.3331 - acc: 0.5385 - val_loss: 8.3100 - val_acc: 0.4156
Epoch 19/20
6460/6680 [============================>.] - ETA: 0s - loss: 7.2582 - acc: 0.5421Epoch 00019: val_loss improved from 8.31001 to 8.19726, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 241us/step - loss: 7.2547 - acc: 0.5424 - val_loss: 8.1973 - val_acc: 0.4240
Epoch 20/20
6580/6680 [============================>.] - ETA: 0s - loss: 7.2033 - acc: 0.5470Epoch 00020: val_loss did not improve
6680/6680 [==============================] - 2s 242us/step - loss: 7.2114 - acc: 0.5464 - val_loss: 8.2249 - val_acc: 0.4240
Out[26]:
<keras.callbacks.History at 0x7f3fd1508128>

Load the Model with the Best Validation Loss

In [27]:
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')

Test the Model

Now, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.

In [28]:
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]

# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 42.9426%

Predict Dog Breed with the Model

In [29]:
from extract_bottleneck_features import *

def VGG16_predict_breed(img_path):
    '''
    INPUT:
    img_path - image path 
    
    OUTPUT:
    returns predicted dog breeds
    '''
    # extract bottleneck features
    bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
    # obtain predicted vector
    predicted_vector = VGG16_model.predict(bottleneck_feature)
    # return dog breed that is predicted by the model
    return dog_names[np.argmax(predicted_vector)]

Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)

You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.

In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras:

The files are encoded as such:

Dog{network}Data.npz

where {network}, in the above filename, can be one of VGG19, Resnet50, InceptionV3, or Xception. Pick one of the above architectures, download the corresponding bottleneck features, and store the downloaded file in the bottleneck_features/ folder in the repository.

(IMPLEMENTATION) Obtain Bottleneck Features

In the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following:

bottleneck_features = np.load('bottleneck_features/Dog{network}Data.npz')
train_{network} = bottleneck_features['train']
valid_{network} = bottleneck_features['valid']
test_{network} = bottleneck_features['test']
In [30]:
### TODO: Obtain bottleneck features from another pre-trained CNN.
bottleneck_features_rn = np.load('bottleneck_features/DogResnet50Data.npz')
train_resnet50 = bottleneck_features_rn['train']
valid_resnet50 = bottleneck_features_rn['valid']
test_resnet50 = bottleneck_features_rn['test']

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line:

    <your model's name>.summary()

Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

Answer: For the final CNN architecture, we use transfer learning with ResNet-50 bottleneck features. Since the convolutional features are already pre-trained, we only need to add a GlobalAveragePooling2D layer and a Dense output layer with a softmax over the 133 breed classes.

This takes full advantage of the pre-trained model while keeping training very fast, since only the final dense layer's weights are learned.
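As a quick arithmetic check on why training stays fast: the only trainable weights are in the final dense layer, whose size follows directly from the ResNet-50 bottleneck width (2048) and the number of breed classes (133):

```python
# dense layer parameters = weights (inputs x outputs) + one bias per output
inputs, classes = 2048, 133
params = inputs * classes + classes
print(params)  # 272517
```

This matches the total reported by `summary()` in the next cell.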

In [45]:
resnet50_model = Sequential()
resnet50_model.add(GlobalAveragePooling2D(input_shape=train_resnet50.shape[1:]))
resnet50_model.add(Dense(133, activation='softmax'))

resnet50_model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
global_average_pooling2d_17  (None, 2048)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 133)               272517    
=================================================================
Total params: 272,517
Trainable params: 272,517
Non-trainable params: 0
_________________________________________________________________

(IMPLEMENTATION) Compile the Model

In [46]:
### TODO: Compile the model.
resnet50_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

(IMPLEMENTATION) Train the Model

Train your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.

You are welcome to augment the training data, but this is not a requirement.
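Conceptually, `ModelCheckpoint(save_best_only=True)` just keeps a running minimum over the monitored validation loss and saves only when it improves. This hypothetical pure-Python sketch (illustrative names, not the Keras API) mirrors the "improved"/"did not improve" messages in the training logs:

```python
def track_best(val_losses):
    """Return the (epoch, loss) pair with the lowest validation loss,
    scanning epochs in order the way save_best_only does."""
    best_loss = float('inf')
    best_epoch = None
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            # "Epoch N: val_loss improved ... saving model"
            best_loss, best_epoch = loss, epoch
        # else: "Epoch N: val_loss did not improve"
    return best_epoch, best_loss

# validation losses from the first four ResNet-50 epochs below
print(track_best([0.9476, 0.9988, 0.9594, 0.9723]))  # (1, 0.9476)
```

In the ResNet-50 run below, only epoch 1 improves the validation loss, so only that epoch's weights end up on disk.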

In [48]:
### TODO: Train the model.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.resnet50.hdf5', 
                               verbose=1, save_best_only=True)

resnet50_model.fit(train_resnet50, train_targets, 
          validation_data=(valid_resnet50, valid_targets),
          epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples
Epoch 1/20
6460/6680 [============================>.] - ETA: 0s - loss: 0.0041 - acc: 0.9986Epoch 00001: val_loss improved from inf to 0.94763, saving model to saved_models/weights.best.resnet50.hdf5
6680/6680 [==============================] - 1s 221us/step - loss: 0.0048 - acc: 0.9985 - val_loss: 0.9476 - val_acc: 0.8299
Epoch 2/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0048 - acc: 0.9986Epoch 00002: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0046 - acc: 0.9987 - val_loss: 0.9988 - val_acc: 0.8299
Epoch 3/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0044 - acc: 0.9989Epoch 00003: val_loss did not improve
6680/6680 [==============================] - 1s 220us/step - loss: 0.0043 - acc: 0.9990 - val_loss: 0.9594 - val_acc: 0.8299
Epoch 4/20
6660/6680 [============================>.] - ETA: 0s - loss: 0.0044 - acc: 0.9986Epoch 00004: val_loss did not improve
6680/6680 [==============================] - 1s 221us/step - loss: 0.0044 - acc: 0.9987 - val_loss: 0.9723 - val_acc: 0.8287
Epoch 5/20
6600/6680 [============================>.] - ETA: 0s - loss: 0.0039 - acc: 0.9989Epoch 00005: val_loss did not improve
6680/6680 [==============================] - 1s 221us/step - loss: 0.0039 - acc: 0.9990 - val_loss: 0.9681 - val_acc: 0.8263
Epoch 6/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0053 - acc: 0.9988Epoch 00006: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0051 - acc: 0.9988 - val_loss: 1.0213 - val_acc: 0.8299
Epoch 7/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0049 - acc: 0.9988Epoch 00007: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0048 - acc: 0.9988 - val_loss: 1.0039 - val_acc: 0.8168
Epoch 8/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0056 - acc: 0.9989Epoch 00008: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0054 - acc: 0.9990 - val_loss: 1.0392 - val_acc: 0.8311
Epoch 9/20
6420/6680 [===========================>..] - ETA: 0s - loss: 0.0052 - acc: 0.9986Epoch 00009: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0050 - acc: 0.9987 - val_loss: 1.0684 - val_acc: 0.8156
Epoch 10/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0046 - acc: 0.9988Epoch 00010: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0045 - acc: 0.9988 - val_loss: 1.0904 - val_acc: 0.8299
Epoch 11/20
6460/6680 [============================>.] - ETA: 0s - loss: 0.0060 - acc: 0.9985Epoch 00011: val_loss did not improve
6680/6680 [==============================] - 1s 220us/step - loss: 0.0058 - acc: 0.9985 - val_loss: 1.0925 - val_acc: 0.8275
Epoch 12/20
6480/6680 [============================>.] - ETA: 0s - loss: 0.0058 - acc: 0.9991Epoch 00012: val_loss did not improve
6680/6680 [==============================] - 1s 218us/step - loss: 0.0056 - acc: 0.9991 - val_loss: 1.1119 - val_acc: 0.8263
Epoch 13/20
6500/6680 [============================>.] - ETA: 0s - loss: 0.0052 - acc: 0.9986Epoch 00013: val_loss did not improve
6680/6680 [==============================] - 1s 218us/step - loss: 0.0050 - acc: 0.9987 - val_loss: 1.1044 - val_acc: 0.8287
Epoch 14/20
6480/6680 [============================>.] - ETA: 0s - loss: 0.0035 - acc: 0.9991Epoch 00014: val_loss did not improve
6680/6680 [==============================] - 1s 218us/step - loss: 0.0034 - acc: 0.9991 - val_loss: 1.1611 - val_acc: 0.8132
Epoch 15/20
6480/6680 [============================>.] - ETA: 0s - loss: 0.0059 - acc: 0.9985Epoch 00015: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0057 - acc: 0.9985 - val_loss: 1.1082 - val_acc: 0.8216
Epoch 16/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0026 - acc: 0.9989Epoch 00016: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0043 - acc: 0.9987 - val_loss: 1.1498 - val_acc: 0.8216
Epoch 17/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0044 - acc: 0.9984Epoch 00017: val_loss did not improve
6680/6680 [==============================] - 1s 219us/step - loss: 0.0042 - acc: 0.9985 - val_loss: 1.1855 - val_acc: 0.8240
Epoch 18/20
6500/6680 [============================>.] - ETA: 0s - loss: 0.0040 - acc: 0.9989Epoch 00018: val_loss did not improve
6680/6680 [==============================] - 1s 218us/step - loss: 0.0043 - acc: 0.9988 - val_loss: 1.1770 - val_acc: 0.8240
Epoch 19/20
6440/6680 [===========================>..] - ETA: 0s - loss: 0.0042 - acc: 0.9991Epoch 00019: val_loss did not improve
6680/6680 [==============================] - 1s 220us/step - loss: 0.0040 - acc: 0.9991 - val_loss: 1.1863 - val_acc: 0.8240
Epoch 20/20
6480/6680 [============================>.] - ETA: 0s - loss: 0.0045 - acc: 0.9986Epoch 00020: val_loss did not improve
6680/6680 [==============================] - 1s 218us/step - loss: 0.0044 - acc: 0.9987 - val_loss: 1.2294 - val_acc: 0.8204
Out[48]:
<keras.callbacks.History at 0x7f3fd17615c0>

(IMPLEMENTATION) Load the Model with the Best Validation Loss

In [49]:
### TODO: Load the model weights with the best validation loss.
resnet50_model.load_weights('saved_models/weights.best.resnet50.hdf5')

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.

In [50]:
### TODO: Calculate classification accuracy on the test dataset.
# get index of predicted dog breed for each image in test set
resnet50_predictions = [np.argmax(resnet50_model.predict(np.expand_dims(feature, axis=0))) for feature in test_resnet50]

# report test accuracy
test_accuracy = 100*np.sum(np.array(resnet50_predictions)==np.argmax(test_targets, axis=1))/len(resnet50_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 81.2201%

(IMPLEMENTATION) Predict Dog Breed with the Model

Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc.) that is predicted by your model.

Similar to the analogous function in Step 4, your function should have three steps:

  1. Extract the bottleneck features corresponding to the chosen CNN model.
  2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.
  3. Use the dog_names array defined in Step 0 of this notebook to return the corresponding breed.

The functions to extract the bottleneck features can be found in extract_bottleneck_features.py, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function

extract_{network}

where {network}, in the above filename, should be one of VGG19, Resnet50, InceptionV3, or Xception.

In [51]:
dog_names[:3]
Out[51]:
['ages/train/001.Affenpinscher',
 'ages/train/002.Afghan_hound',
 'ages/train/003.Airedale_terrier']
In [52]:
dog_names[-3:]
Out[52]:
['ages/train/131.Wirehaired_pointing_griffon',
 'ages/train/132.Xoloitzcuintli',
 'ages/train/133.Yorkshire_terrier']
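The outputs above show that each `dog_names` entry carries a truncated path prefix of the form `ages/train/NNN.`, which is 15 characters long. Slicing those characters off recovers the bare breed name (this is what the `[15:]` in the prediction function does); splitting on the separators works as well:

```python
name = 'ages/train/001.Affenpinscher'

# strip the fixed-length 'ages/train/NNN.' prefix (15 characters)
print(name[15:])  # Affenpinscher

# equivalent, without relying on a hard-coded length:
# take the basename, then drop everything up to the first dot
print(name.split('/')[-1].split('.', 1)[1])  # Affenpinscher
```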
In [76]:
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
from extract_bottleneck_features import extract_Resnet50

def ResNet50_predict_breed(img_path):
    '''
    INPUT:
    img_path - image path 
    
    OUTPUT:
    returns the name of the predicted dog breed found in the image
    '''
    # extract bottleneck features
    bottleneck_feature = extract_Resnet50(path_to_tensor(img_path))
    # obtain predicted vector
    predicted_vector = resnet50_model.predict(bottleneck_feature)
    # return the dog breed that is predicted by the model,
    # stripping the 15-character 'ages/train/NNN.' prefix from the entry
    return dog_names[np.argmax(predicted_vector)][15:]
In [77]:
dog_files_short[2]
Out[77]:
'../../../data/dog_images/train/088.Irish_water_spaniel/Irish_water_spaniel_06014.jpg'
In [78]:
ResNet50_predict_breed(dog_files_short[2])
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94658560/94653016 [==============================] - 1s 0us/step
Out[78]:
'Irish_water_spaniel'
In [79]:
import matplotlib.image as mpimg

img = mpimg.imread(dog_files_short[2])
imgplot = plt.imshow(img)
plt.show()

Step 6: Write your Algorithm

Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

  • if a dog is detected in the image, return the predicted breed.
  • if a human is detected in the image, return the resembling dog breed.
  • if neither is detected in the image, provide output that indicates an error.

You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 5 to predict dog breed.

A sample image and output for our algorithm is provided below, but feel free to design your own user experience!

Sample Human Output

This photo looks like an Afghan Hound.

(IMPLEMENTATION) Write your Algorithm

In [1]:
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg

def image_breed_detector(img_path):
    '''
    INPUT:
    img_path - image path 
    
    Description:
    This function will first check if a dog is present in the picture.
    If so, it will output the predicted breed.
    If no dog was detected, the function will check for a human face.
    If a human face is found, it will output a prediction for a resembling dog breed.
    In case neither a dog nor a human face is detected, an error message is displayed.
    In any case, the given image will be presented back to the user as a reference.
    '''
    # dog_detector(img_path) is True if there is a dog in the picture, and False if not
    if dog_detector(img_path):
        dog_breed = ResNet50_predict_breed(img_path)
        print(f"This picture shows a {dog_breed}.")
        
    # since the picture does not appear to show a dog, let's see if it shows a human face
    # face_detector(img_path) is True if a human face was detected in the picture, and False if not
    elif face_detector(img_path):
        dog_breed = ResNet50_predict_breed(img_path)
        print(f"This picture resembles a {dog_breed}.")
        
    else: 
        print("Please choose an image of a human or a dog.")
    
    img = mpimg.imread(img_path)
    imgplot = plt.imshow(img)
    plt.show()

Step 7: Test Your Algorithm

In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

(IMPLEMENTATION) Test Your Algorithm on Sample Images!

Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.

Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

Answer: The output is actually better than what I expected.

There were no false positives among the cat pictures: not one cat was identified as a dog or a human face.

When two different species are present in one picture (such as the dog and the cat, or the dog and the man), the algorithm identified the dog both times. I find this somewhat impressive, since the dog in the dog-and-cat picture belongs to a lesser-known breed, is only partly in the frame, and is photographed from the side. In the picture with the man and the dog, the Dog Detector finds the dog first, so the human face is never checked. I don't think the dog is a Dachshund, but the way its face gets compressed in the picture does resemble some of the stretched facial features of a Dachshund.

Improvements:

  1. It would be cool to replace the human detector with a less face-centric algorithm. The current face detector is very good at detecting clear, frontal pictures of human faces, but it fails on most other kinds of human pictures, such as side shots or whole-body shots.
  2. It would be cool to extend the algorithm with a cat breed identifier - quite similar to the dog breed classifier approach, also using a pre-trained CNN, to identify different cat breeds in a picture.
  3. Ultimately, it would be neat if these two or three separate algorithms produced a combined report, for example:

In this picture a human, a cat, and 3 dogs were found.

If a human is present in the picture, the report would also state the closest dog breed resemblance as well as the closest cat breed resemblance.
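As a rough sketch of that third idea: given per-species counts from the (hypothetical) separate human, cat, and dog detectors, the combined report itself is just string formatting. The function below is illustrative only; the detector names and counts are assumptions, not part of the project code.

```python
def combined_report(num_humans, num_cats, num_dogs):
    """Build a one-line summary such as
    'In this picture a human, a cat, and 3 dogs were found.'

    The counts are assumed to come from separate (hypothetical)
    human, cat, and dog detectors; this handles only the formatting.
    """
    def phrase(count, noun):
        # Turn a count into a readable phrase, or None if the count is zero.
        if count == 0:
            return None
        return f"a {noun}" if count == 1 else f"{count} {noun}s"

    parts = [p for p in (phrase(num_humans, "human"),
                         phrase(num_cats, "cat"),
                         phrase(num_dogs, "dog")) if p]
    if not parts:
        return "Nothing was found in this picture."
    if len(parts) == 1:
        found = parts[0]
    else:
        # Oxford comma only when listing three items.
        joiner = ", and " if len(parts) > 2 else " and "
        found = ", ".join(parts[:-1]) + joiner + parts[-1]
    verb = "was" if (num_humans + num_cats + num_dogs) == 1 else "were"
    return f"In this picture {found} {verb} found."
```

For example, `combined_report(1, 1, 3)` produces exactly the sample sentence above.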

In [81]:
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
image_breed_detector("./pixabay-pics/cat-forest_greens_640.jpg")
Please choose an image of a human or a dog.
In [82]:
image_breed_detector("./pixabay-pics/cat-front_face_640.jpg")
Please choose an image of a human or a dog.
In [83]:
image_breed_detector("./pixabay-pics/cat-looking_up_640.jpg")
Please choose an image of a human or a dog.
In [84]:
image_breed_detector("./pixabay-pics/cat-night_640.jpg")
Please choose an image of a human or a dog.
In [85]:
image_breed_detector("./pixabay-pics/cat-on_meadow_640.jpg")
Please choose an image of a human or a dog.
In [86]:
image_breed_detector("./pixabay-pics/cat-street_640.jpg")
Please choose an image of a human or a dog.
In [87]:
image_breed_detector("./pixabay-pics/cat-thinking_pose_640.jpg")
Please choose an image of a human or a dog.

These replies are correct, since these pictures show cats, not humans or dogs.

In [88]:
image_breed_detector("./pixabay-pics/cat_and_dog_640.jpg")
This picture shows a Papillon.

This picture is somewhat tricky, since it shows a cat and a dog.

The dog in the picture could be a Silky Terrier (the result of the first prediction run; it is certainly hairy enough), but I am not sure. I don't think it is a Papillon (the result of the second prediction run).

In [89]:
image_breed_detector("./pixabay-pics/dog-berner_640.jpg")
This picture shows a Bernese_mountain_dog.
In [90]:
image_breed_detector("./pixabay-pics/dog-corgi_640.jpg")
This picture shows a Pembroke_welsh_corgi.
In [91]:
image_breed_detector("./pixabay-pics/dog-maltese_640.jpg")
This picture shows a Maltese.
In [92]:
image_breed_detector("./pixabay-pics/dog-puppy_beagle_640.jpg")
Please choose an image of a human or a dog.

I chose this picture because I was curious whether this dog could be detected despite the big tennis ball in front of its face and its puppy stage.

Apparently, the "Dog Detector" incorrectly says that this is not a dog.

It gets correctly classified as a Beagle by the ResNet50 prediction, though.

In [93]:
ResNet50_predict_breed("./pixabay-pics/dog-puppy_beagle_640.jpg")
Out[93]:
'Beagle'
In [94]:
image_breed_detector("./pixabay-pics/dog-puppy_unknown_640.jpg")
This picture shows a Kuvasz.

I don't think this is a Labrador (the result of the first prediction run), but it's a bit tough to tell from a picture of a puppy.

I am also not sure about the Kuvasz (result of the second prediction run).

In [95]:
image_breed_detector("./pixabay-pics/human-and-dog_640.jpg")
This picture shows a Dachshund.

This picture shows a man with his dog. The Dog Detector finds the dog first, so the human face is never checked.

I don't think the dog is a Dachshund, but the way its face gets compressed in the picture does resemble some of the stretched facial features of a Dachshund.

In [96]:
image_breed_detector("./pixabay-pics/human_example_afghan.jpg")
This picture resembles a English_toy_spaniel.

I tried using your example picture from the explanation above, and I got a different resembling breed back.

Looking at some Google Images of both breeds, I think this picture much more closely resembles a Cavalier King Charles Spaniel than an Afghan Hound.

On the second prediction run, the result was "English Toy Spaniel". To me, those two types of Spaniel look really close - I guess my algorithm is only sure that this must be a Spaniel :-).

In [97]:
image_breed_detector("./pixabay-pics/human-street_640.jpg")
Please choose an image of a human or a dog.

I found that the Human Face Detector really is just a face detector.

This picture clearly contains a human, but since the face is not clearly visible, nothing is detected.

In [98]:
image_breed_detector("./pixabay-pics/human-curly_reddish_blond_640.jpg")
This picture resembles a English_toy_spaniel.
In [99]:
image_breed_detector("./pixabay-pics/human-long_brown_straight_hair_640.jpg")
This picture resembles a English_springer_spaniel.
In [100]:
image_breed_detector("./pixabay-pics/human_hair_640.jpg")
This picture resembles a English_toy_spaniel.

This is quite interesting:

On the first prediction run, all three female faces were matched to Beagles.

On the second prediction run, all three changed to English Toy or English Springer Spaniels :-).

In [101]:
image_breed_detector("./pixabay-pics/human-model_640.jpg")
This picture resembles a English_springer_spaniel.
In [102]:
image_breed_detector("./pixabay-pics/human-umbrella_640.jpg")
This picture resembles a Dogue_de_bordeaux.

The results for the human images are interesting.

So, women look like Beagles (or English Toy / Springer Spaniels after the second prediction run)?

A man with a clean haircut looks like a Silky Terrier (or like an English Springer Spaniel on the second run)?

The facial features of the guy holding an umbrella somewhat resemble those of a Dogue de Bordeaux.